Jan 24 00:50:03.094155 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026
Jan 24 00:50:03.094202 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877
Jan 24 00:50:03.094224 kernel: BIOS-provided physical RAM map:
Jan 24 00:50:03.094236 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:50:03.094246 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:50:03.094509 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:50:03.094524 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:50:03.094534 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:50:03.094817 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:50:03.094835 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:50:03.094855 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:50:03.094866 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:50:03.094876 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:50:03.094885 kernel: NX (Execute Disable) protection: active
Jan 24 00:50:03.094896 kernel: APIC: Static calls initialized
Jan 24 00:50:03.094911 kernel: SMBIOS 2.8 present.
Jan 24 00:50:03.095054 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:50:03.095067 kernel: DMI: Memory slots populated: 1/1
Jan 24 00:50:03.095077 kernel: Hypervisor detected: KVM
Jan 24 00:50:03.095087 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:50:03.095097 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:50:03.095107 kernel: kvm-clock: using sched offset of 26517802466 cycles
Jan 24 00:50:03.095119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:50:03.095129 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:50:03.095147 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:50:03.095160 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:50:03.095171 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:50:03.095183 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:50:03.095195 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:50:03.095206 kernel: Using GB pages for direct mapping
Jan 24 00:50:03.095218 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:50:03.095237 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:50:03.096167 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.096210 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.096223 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.096234 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:50:03.097817 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.097842 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.097861 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.097874 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:50:03.097892 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:50:03.097903 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:50:03.097915 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:50:03.097931 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:50:03.097943 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:50:03.097955 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:50:03.097967 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:50:03.097979 kernel: No NUMA configuration found
Jan 24 00:50:03.097990 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:50:03.098005 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 24 00:50:03.098018 kernel: Zone ranges:
Jan 24 00:50:03.098029 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:50:03.098041 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:50:03.098053 kernel: Normal empty
Jan 24 00:50:03.098065 kernel: Device empty
Jan 24 00:50:03.098076 kernel: Movable zone start for each node
Jan 24 00:50:03.098088 kernel: Early memory node ranges
Jan 24 00:50:03.098103 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:50:03.098115 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:50:03.098127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:50:03.098139 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:50:03.098151 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:50:03.098529 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:50:03.098548 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:50:03.098567 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:50:03.098578 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:50:03.098588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:50:03.098870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:50:03.098889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:50:03.098903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:50:03.098916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:50:03.098934 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:50:03.098944 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:50:03.098955 kernel: TSC deadline timer available
Jan 24 00:50:03.098965 kernel: CPU topo: Max. logical packages: 1
Jan 24 00:50:03.098979 kernel: CPU topo: Max. logical dies: 1
Jan 24 00:50:03.098992 kernel: CPU topo: Max. dies per package: 1
Jan 24 00:50:03.099003 kernel: CPU topo: Max. threads per core: 1
Jan 24 00:50:03.099015 kernel: CPU topo: Num. cores per package: 4
Jan 24 00:50:03.099034 kernel: CPU topo: Num. threads per package: 4
Jan 24 00:50:03.099048 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 24 00:50:03.099062 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:50:03.099076 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:50:03.099089 kernel: kvm-guest: setup PV sched yield
Jan 24 00:50:03.099101 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:50:03.099114 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:50:03.099135 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:50:03.099147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:50:03.099160 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 24 00:50:03.099174 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 24 00:50:03.099187 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:50:03.099200 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:50:03.103935 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:50:03.103981 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877
Jan 24 00:50:03.103994 kernel: random: crng init done
Jan 24 00:50:03.104005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:50:03.104018 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:50:03.104029 kernel: Fallback order for Node 0: 0
Jan 24 00:50:03.104041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 24 00:50:03.104052 kernel: Policy zone: DMA32
Jan 24 00:50:03.104069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:50:03.104080 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:50:03.104092 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 24 00:50:03.105444 kernel: ftrace: allocated 157 pages with 5 groups
Jan 24 00:50:03.105469 kernel: Dynamic Preempt: voluntary
Jan 24 00:50:03.105480 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:50:03.105493 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:50:03.105514 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:50:03.105528 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:50:03.105645 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:50:03.105660 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:50:03.105796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:50:03.105807 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:50:03.105819 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:50:03.105839 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:50:03.105853 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:50:03.105865 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:50:03.105876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:50:03.105900 kernel: Console: colour VGA+ 80x25
Jan 24 00:50:03.105918 kernel: printk: legacy console [ttyS0] enabled
Jan 24 00:50:03.105932 kernel: ACPI: Core revision 20240827
Jan 24 00:50:03.105944 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:50:03.105955 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:50:03.105972 kernel: x2apic enabled
Jan 24 00:50:03.105985 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:50:03.106101 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:50:03.106115 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:50:03.106134 kernel: kvm-guest: setup PV IPIs
Jan 24 00:50:03.106147 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:50:03.106162 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 00:50:03.106173 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:50:03.106184 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:50:03.106196 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:50:03.106210 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:50:03.106226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:50:03.106238 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:50:03.106469 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:50:03.106485 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:50:03.106498 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:50:03.106512 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:50:03.106523 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:50:03.106540 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:50:03.106555 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:50:03.106568 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:50:03.106582 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:50:03.106594 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:50:03.106605 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:50:03.106616 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:50:03.106636 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:50:03.106649 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:50:03.109074 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:50:03.109095 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 24 00:50:03.109108 kernel: landlock: Up and running.
Jan 24 00:50:03.109123 kernel: SELinux: Initializing.
Jan 24 00:50:03.109134 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:50:03.109152 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:50:03.109465 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:50:03.109481 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:50:03.109493 kernel: signal: max sigframe size: 1776
Jan 24 00:50:03.109505 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:50:03.109518 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:50:03.109530 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 24 00:50:03.109547 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:50:03.111050 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:50:03.111063 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:50:03.111075 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:50:03.111087 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:50:03.111099 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:50:03.111112 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120520K reserved, 0K cma-reserved)
Jan 24 00:50:03.111129 kernel: devtmpfs: initialized
Jan 24 00:50:03.111141 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:50:03.111153 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:50:03.111165 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:50:03.111177 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:50:03.111189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:50:03.111201 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:50:03.111483 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:50:03.111497 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:50:03.111508 kernel: audit: type=2000 audit(1769215771.282:1): state=initialized audit_enabled=0 res=1
Jan 24 00:50:03.111519 kernel: cpuidle: using governor menu
Jan 24 00:50:03.111531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:50:03.111542 kernel: dca service started, version 1.12.1
Jan 24 00:50:03.111554 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 24 00:50:03.111566 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:50:03.111582 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:50:03.111594 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:50:03.111606 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:50:03.111618 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:50:03.111629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:50:03.111641 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:50:03.111653 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:50:03.112888 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:50:03.112900 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:50:03.112909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:50:03.112917 kernel: ACPI: Interpreter enabled
Jan 24 00:50:03.112925 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:50:03.112933 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:50:03.112941 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:50:03.112954 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:50:03.112962 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:50:03.112970 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:50:03.113586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:50:03.115487 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:50:03.115829 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:50:03.115849 kernel: PCI host bridge to bus 0000:00
Jan 24 00:50:03.116066 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:50:03.116462 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:50:03.117082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:50:03.117599 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:50:03.118502 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:50:03.119160 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:50:03.120007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:50:03.121001 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 24 00:50:03.121653 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 24 00:50:03.122558 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:50:03.123241 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:50:03.124127 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:50:03.124653 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:50:03.125520 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 24414 usecs
Jan 24 00:50:03.126241 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 24 00:50:03.126905 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 24 00:50:03.127207 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:50:03.127833 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:50:03.128119 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 24 00:50:03.128795 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 24 00:50:03.129045 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:50:03.129548 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:50:03.129958 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 24 00:50:03.130223 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 24 00:50:03.130837 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:50:03.131119 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:50:03.131593 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:50:03.132001 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 24 00:50:03.132490 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:50:03.132885 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 24 00:50:03.133459 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 24 00:50:03.133864 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 24 00:50:03.134149 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:50:03.134659 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 24 00:50:03.135060 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 24 00:50:03.135078 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:50:03.135090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:50:03.135101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:50:03.135112 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:50:03.135131 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:50:03.135145 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:50:03.135158 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:50:03.135172 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:50:03.135183 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:50:03.135194 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:50:03.135205 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:50:03.135221 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:50:03.135232 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:50:03.135244 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:50:03.135457 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:50:03.135469 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:50:03.135482 kernel: iommu: Default domain type: Translated
Jan 24 00:50:03.135496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:50:03.135512 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:50:03.135523 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:50:03.135535 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:50:03.135546 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:50:03.135951 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:50:03.136226 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:50:03.136861 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:50:03.136887 kernel: vgaarb: loaded
Jan 24 00:50:03.136899 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:50:03.136912 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:50:03.136924 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:50:03.136936 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:50:03.136948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:50:03.136960 kernel: pnp: PnP ACPI init
Jan 24 00:50:03.137484 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:50:03.137506 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:50:03.137518 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:50:03.137530 kernel: NET: Registered PF_INET protocol family
Jan 24 00:50:03.137544 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:50:03.137556 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:50:03.137574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:50:03.137585 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:50:03.137596 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:50:03.137607 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:50:03.137619 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:50:03.137631 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:50:03.137642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:50:03.137658 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:50:03.138037 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:50:03.138490 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:50:03.138867 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:50:03.139112 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:50:03.139575 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:50:03.139959 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:50:03.139984 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:50:03.139996 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 24 00:50:03.140007 kernel: Initialise system trusted keyrings
Jan 24 00:50:03.140018 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:50:03.140031 kernel: Key type asymmetric registered
Jan 24 00:50:03.140043 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:50:03.140055 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 24 00:50:03.140072 kernel: io scheduler mq-deadline registered
Jan 24 00:50:03.140086 kernel: io scheduler kyber registered
Jan 24 00:50:03.140097 kernel: io scheduler bfq registered
Jan 24 00:50:03.140109 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:50:03.140121 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:50:03.140136 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:50:03.140149 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:50:03.140165 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:50:03.140176 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:50:03.140189 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:50:03.140202 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:50:03.140213 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:50:03.140841 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:50:03.140865 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:50:03.141141 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:50:03.141622 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:49:45 UTC (1769215785)
Jan 24 00:50:03.142034 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:50:03.142053 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:50:03.142066 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:50:03.142077 kernel: Segment Routing with IPv6
Jan 24 00:50:03.142089 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:50:03.142107 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:50:03.142119 kernel: Key type dns_resolver registered
Jan 24 00:50:03.142130 kernel: IPI shorthand broadcast: enabled
Jan 24 00:50:03.142142 kernel: sched_clock: Marking stable (11904127811, 2209073347)->(16655662209, -2542461051)
Jan 24 00:50:03.142153 kernel: registered taskstats version 1
Jan 24 00:50:03.142165 kernel: Loading compiled-in X.509 certificates
Jan 24 00:50:03.142177 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d'
Jan 24 00:50:03.142191 kernel: Demotion targets for Node 0: null
Jan 24 00:50:03.142203 kernel: Key type .fscrypt registered
Jan 24 00:50:03.142215 kernel: Key type fscrypt-provisioning registered
Jan 24 00:50:03.142229 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:50:03.142240 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:50:03.142452 kernel: ima: No architecture policies found
Jan 24 00:50:03.142466 kernel: clk: Disabling unused clocks
Jan 24 00:50:03.142483 kernel: Freeing unused kernel image (initmem) memory: 15540K
Jan 24 00:50:03.142495 kernel: Write protecting the kernel read-only data: 47104k
Jan 24 00:50:03.142507 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 24 00:50:03.142519 kernel: Run /init as init process
Jan 24 00:50:03.142530 kernel: with arguments:
Jan 24 00:50:03.142542 kernel: /init
Jan 24 00:50:03.142553 kernel: with environment:
Jan 24 00:50:03.142568 kernel: HOME=/
Jan 24 00:50:03.142579 kernel: TERM=linux
Jan 24 00:50:03.142591 kernel: hrtimer: interrupt took 5355568 ns
Jan 24 00:50:03.142602 kernel: SCSI subsystem initialized
Jan 24 00:50:03.142613 kernel: libata version 3.00 loaded.
Jan 24 00:50:03.143010 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:50:03.143030 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:50:03.143506 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 24 00:50:03.143905 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 24 00:50:03.144185 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:50:03.144844 kernel: scsi host0: ahci
Jan 24 00:50:03.145181 kernel: scsi host1: ahci
Jan 24 00:50:03.147034 kernel: scsi host2: ahci
Jan 24 00:50:03.148451 kernel: scsi host3: ahci
Jan 24 00:50:03.149201 kernel: scsi host4: ahci
Jan 24 00:50:03.149839 kernel: scsi host5: ahci
Jan 24 00:50:03.149864 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 24 00:50:03.149877 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 24 00:50:03.149896 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 24 00:50:03.149911 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 24 00:50:03.149923 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 24 00:50:03.149934 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 24 00:50:03.149949 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:50:03.149961 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:50:03.149972 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:50:03.149991 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:50:03.150002 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:50:03.150013 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:50:03.150028 kernel: ata3.00: LPM support broken, forcing max_power
Jan 24 00:50:03.150039 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:50:03.150050 kernel: ata3.00: applying bridge limits
Jan 24 00:50:03.150064 kernel: ata3.00: LPM support broken, forcing max_power
Jan 24 00:50:03.150081 kernel: ata3.00: configured for UDMA/100
Jan 24 00:50:03.150792 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:50:03.151128 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 24 00:50:03.151831 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:50:03.151853 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:50:03.152154 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 24 00:50:03.152844 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:50:03.152864 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:50:03.152881 kernel: GPT:16515071 != 27000831
Jan 24 00:50:03.152894 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:50:03.152912 kernel: GPT:16515071 != 27000831
Jan 24 00:50:03.152924 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:50:03.152938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:50:03.152956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:50:03.152969 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:50:03.152982 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 24 00:50:03.152995 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 24 00:50:03.153008 kernel: raid6: avx2x4 gen() 6736 MB/s
Jan 24 00:50:03.153021 kernel: raid6: avx2x2 gen() 12079 MB/s
Jan 24 00:50:03.153034 kernel: raid6: avx2x1 gen() 10157 MB/s
Jan 24 00:50:03.153051 kernel: raid6: using algorithm avx2x2 gen() 12079 MB/s
Jan 24 00:50:03.153064 kernel: raid6: .... xor() 11825 MB/s, rmw enabled
Jan 24 00:50:03.153077 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:50:03.153090 kernel: xor: automatically using best checksumming function avx
Jan 24 00:50:03.153110 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:50:03.153124 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (180)
Jan 24 00:50:03.153137 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f
Jan 24 00:50:03.153150 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:50:03.153163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:50:03.153176 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 24 00:50:03.153189 kernel: loop: module loaded
Jan 24 00:50:03.153206 kernel: loop0: detected capacity change from 0 to 100560
Jan 24 00:50:03.153219 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:50:03.153234 systemd[1]: Successfully made /usr/ read-only.
Jan 24 00:50:03.153456 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 24 00:50:03.153475 systemd[1]: Detected virtualization kvm.
Jan 24 00:50:03.153489 systemd[1]: Detected architecture x86-64.
Jan 24 00:50:03.153508 systemd[1]: Running in initrd.
Jan 24 00:50:03.153520 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:50:03.153537 systemd[1]: Hostname set to .
Jan 24 00:50:03.153548 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 24 00:50:03.153560 systemd[1]: Queued start job for default target initrd.target.
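The "GPT:16515071 != 27000831" warnings above are explained by simple arithmetic: a GPT backup header must live in the disk's last logical block, but this image was built for a smaller disk and the virtual disk was later grown, so the backup header is stranded at the old end (the disk-uuid service rewrites it later in this log). A sketch, using only values taken from the log above:

```python
# Sketch (not from the log itself): why the kernel printed
# "GPT:16515071 != 27000831" for /dev/vda.
SECTOR_SIZE = 512
total_sectors = 27000832        # virtio_blk: [vda] 27000832 512-byte logical blocks

expected_backup_lba = total_sectors - 1   # backup GPT header belongs in the last LBA
reported_backup_lba = 16515071            # where the primary header actually points

grown_by = (expected_backup_lba - reported_backup_lba) * SECTOR_SIZE
print(f"backup header expected at LBA {expected_backup_lba}, "
      f"found at LBA {reported_backup_lba}; disk was grown by "
      f"{grown_by // 2**20} MiB")
```

The difference works out to exactly 5 GiB, consistent with an image resized after build; tools like `sgdisk -e` or GNU Parted (as the kernel suggests) relocate the backup structures to the new end of the disk.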
Jan 24 00:50:03.153572 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:50:03.153585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:50:03.153605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:50:03.153620 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:50:03.153632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:50:03.153645 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:50:03.153660 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:50:03.153796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:50:03.153808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:50:03.153823 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 24 00:50:03.153836 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:50:03.153848 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:50:03.153861 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:50:03.153877 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:50:03.153895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:50:03.153907 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:50:03.153920 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 24 00:50:03.153934 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:50:03.153946 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 24 00:50:03.153958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:50:03.153972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:50:03.153990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:50:03.154004 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:50:03.154018 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:50:03.154031 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:50:03.154045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:50:03.154059 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:50:03.154078 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 24 00:50:03.154092 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:50:03.154106 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:50:03.154119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:50:03.154134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:50:03.154152 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:50:03.154167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:50:03.154180 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:50:03.154195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:50:03.154567 systemd-journald[315]: Collecting audit messages is enabled.
Jan 24 00:50:03.154610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:50:03.154624 kernel: Bridge firewalling registered
Jan 24 00:50:03.154637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:50:03.154651 systemd-journald[315]: Journal started
Jan 24 00:50:03.154801 systemd-journald[315]: Runtime Journal (/run/log/journal/0914d799df074f7d9186f89f483d1458) is 6M, max 48.2M, 42.1M free.
Jan 24 00:50:03.088232 systemd-modules-load[318]: Inserted module 'br_netfilter'
Jan 24 00:50:04.945862 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:50:04.946585 kernel: audit: type=1130 audit(1769215804.760:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:04.953526 kernel: audit: type=1130 audit(1769215804.915:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:04.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:04.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.036168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:50:05.118479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:50:05.179211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
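The device unit names above, such as "dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device", come from systemd's unit-name escaping: '/' in a device path becomes '-', and other bytes (including a literal '-') become \xNN, as in systemd-escape(1). A minimal, assumption-laden re-implementation for plain ASCII paths (real systemd handles more cases, e.g. leading dots):

```python
# Illustrative sketch of systemd's path escaping; simplified, ASCII-only.
def systemd_escape_path(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")                 # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                  # safe characters pass through
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else, incl. '-', is \xNN
    return "".join(out)

name = systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device"
print(name)   # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log
```

This is why a label containing a dash produces the otherwise odd-looking `\x2d` sequences in unit names.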
Jan 24 00:50:05.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.330139 kernel: audit: type=1130 audit(1769215805.240:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.343151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:50:05.472529 kernel: audit: type=1130 audit(1769215805.395:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.478114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:50:05.498911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:50:05.641545 systemd-tmpfiles[329]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 24 00:50:05.693579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:50:05.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.791499 kernel: audit: type=1130 audit(1769215805.697:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.832635 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:50:05.927559 kernel: audit: type=1130 audit(1769215805.860:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.927605 kernel: audit: type=1334 audit(1769215805.872:8): prog-id=6 op=LOAD
Jan 24 00:50:05.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:05.872000 audit: BPF prog-id=6 op=LOAD
Jan 24 00:50:05.927145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:50:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:06.008875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:50:06.122900 kernel: audit: type=1130 audit(1769215806.024:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:06.248944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:50:06.443156 kernel: audit: type=1130 audit(1769215806.256:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:06.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:06.439668 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:50:06.720480 dracut-cmdline[358]: dracut-109
Jan 24 00:50:06.786166 dracut-cmdline[358]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877
Jan 24 00:50:07.275974 systemd-resolved[347]: Positive Trust Anchors:
Jan 24 00:50:07.276100 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:50:07.276107 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 24 00:50:07.276149 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:50:07.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:07.615559 systemd-resolved[347]: Defaulting to hostname 'linux'.
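The dracut-cmdline line above shows that some parameters (rootflags=rw, mount.usrflags=ro) appear twice: dracut prepends its own defaults before the BIOS-provided command line, and later occurrences win. A sketch of that key=value folding, using a shortened copy of the logged command line (real cmdline parsing also handles quoting, which this ignores):

```python
# Sketch: folding a kernel command line into a dict, last occurrence wins.
# The string is an abridged copy of what dracut-cmdline[358] logs above.
cmdline = (
    "SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro "
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected"
)

def parse_cmdline(line: str) -> dict:
    params = {}
    for token in line.split():
        key, sep, value = token.partition("=")   # split only on the first '='
        params[key] = value if sep else None     # bare flags map to None
    return params

params = parse_cmdline(cmdline)
print(params["root"])        # LABEL=ROOT (value itself may contain '=')
print(params["rootflags"])   # rw, taken from the last occurrence
```

Duplicates are harmless here precisely because both copies carry the same values.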
Jan 24 00:50:07.835883 kernel: audit: type=1130 audit(1769215807.720:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:07.639965 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:50:07.722937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:50:09.329646 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:50:09.504462 kernel: iscsi: registered transport (tcp)
Jan 24 00:50:09.683422 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:50:09.683490 kernel: QLogic iSCSI HBA Driver
Jan 24 00:50:10.075163 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:50:10.319913 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:50:10.434645 kernel: audit: type=1130 audit(1769215810.342:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:10.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:10.350235 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:50:10.953885 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:50:11.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:11.036951 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:50:11.154620 kernel: audit: type=1130 audit(1769215811.008:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:11.194937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:50:11.607692 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:50:11.844550 kernel: audit: type=1130 audit(1769215811.611:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:11.844613 kernel: audit: type=1334 audit(1769215811.633:15): prog-id=7 op=LOAD
Jan 24 00:50:11.844648 kernel: audit: type=1334 audit(1769215811.633:16): prog-id=8 op=LOAD
Jan 24 00:50:11.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:11.633000 audit: BPF prog-id=7 op=LOAD
Jan 24 00:50:11.633000 audit: BPF prog-id=8 op=LOAD
Jan 24 00:50:11.691189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:50:12.430071 systemd-udevd[570]: Using default interface naming scheme 'v257'.
Jan 24 00:50:12.564539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:50:12.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:12.742602 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:50:12.832697 kernel: audit: type=1130 audit(1769215812.730:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:13.131680 dracut-pre-trigger[598]: rd.md=0: removing MD RAID activation
Jan 24 00:50:14.180702 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:50:14.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:14.368206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:50:14.431455 kernel: audit: type=1130 audit(1769215814.332:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:15.475908 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:50:15.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:15.534525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:50:15.671096 kernel: audit: type=1130 audit(1769215815.532:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:15.671132 kernel: audit: type=1130 audit(1769215815.604:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:15.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:15.672695 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:50:15.715000 audit: BPF prog-id=9 op=LOAD
Jan 24 00:50:15.729080 kernel: audit: type=1334 audit(1769215815.715:21): prog-id=9 op=LOAD
Jan 24 00:50:15.732612 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:50:16.621174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 24 00:50:16.702665 systemd-networkd[739]: lo: Link UP
Jan 24 00:50:16.702672 systemd-networkd[739]: lo: Gained carrier
Jan 24 00:50:16.717673 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:50:17.056562 kernel: audit: type=1130 audit(1769215816.877:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:16.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:17.034676 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:50:17.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:17.222718 kernel: audit: type=1130 audit(1769215817.159:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:17.371042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 24 00:50:17.523497 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 24 00:50:17.541732 systemd[1]: Reached target network.target - Network.
Jan 24 00:50:17.541953 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:50:17.542013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:50:17.542058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:50:17.693600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:50:18.460712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:50:18.552070 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:50:18.614092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:50:18.899966 disk-uuid[765]: Primary Header is updated.
Jan 24 00:50:18.899966 disk-uuid[765]: Secondary Entries is updated.
Jan 24 00:50:18.899966 disk-uuid[765]: Secondary Header is updated.
Jan 24 00:50:19.238589 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:50:19.597980 kernel: audit: type=1130 audit(1769215819.310:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:19.598148 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:50:19.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:19.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:19.849024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:50:19.985937 kernel: audit: type=1131 audit(1769215819.876:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:19.849603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:50:19.877958 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:50:20.006158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:50:20.302510 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 24 00:50:21.143045 disk-uuid[767]: Warning: The kernel is still using the old partition table.
Jan 24 00:50:21.143045 disk-uuid[767]: The new table will be used at the next reboot or after you
Jan 24 00:50:21.143045 disk-uuid[767]: run partprobe(8) or kpartx(8)
Jan 24 00:50:21.143045 disk-uuid[767]: The operation has completed successfully.
Jan 24 00:50:24.628671 kernel: audit: type=1130 audit(1769215824.459:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:24.628723 kernel: audit: type=1131 audit(1769215824.459:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:24.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:24.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:21.242137 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 24 00:50:24.887691 kernel: audit: type=1130 audit(1769215824.754:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:24.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:21.242145 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:50:21.406102 systemd-networkd[739]: eth0: Link UP
Jan 24 00:50:21.408894 systemd-networkd[739]: eth0: Gained carrier
Jan 24 00:50:21.408923 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 24 00:50:21.523897 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:50:22.198155 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:50:22.198965 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:50:22.496725 systemd-networkd[739]: eth0: Gained IPv6LL
Jan 24 00:50:24.612189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:50:24.788032 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
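The DHCPv4 lease systemd-networkd reports for eth0 ("10.0.0.105/16, gateway 10.0.0.1") can be unpacked with Python's stdlib `ipaddress` module; a small sketch using only the values from the log:

```python
# Sketch: interpreting the eth0 lease logged by systemd-networkd above.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.105/16")   # address + prefix from the log
gateway = ipaddress.ip_address("10.0.0.1")        # gateway from the log

print(iface.network)            # the enclosing network, 10.0.0.0/16
print(iface.network.netmask)    # 255.255.0.0
print(gateway in iface.network) # True: gateway is reachable on-link
```

The /16 prefix means the guest treats the whole 10.0.0.0/16 range as on-link, so the 10.0.0.1 gateway needs no extra route.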
Jan 24 00:50:25.907696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (855)
Jan 24 00:50:25.908133 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:50:25.996684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:50:26.183072 kernel: BTRFS info (device vda6): turning on async discard
Jan 24 00:50:26.183166 kernel: BTRFS info (device vda6): enabling free space tree
Jan 24 00:50:26.358118 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc
Jan 24 00:50:26.431153 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:50:26.532139 kernel: audit: type=1130 audit(1769215826.464:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:26.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:26.489548 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:50:33.147579 ignition[875]: Ignition 2.24.0
Jan 24 00:50:33.228731 ignition[875]: Stage: fetch-offline
Jan 24 00:50:33.297194 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:50:33.319006 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:50:33.323943 ignition[875]: parsed url from cmdline: ""
Jan 24 00:50:33.323955 ignition[875]: no config URL provided
Jan 24 00:50:33.400985 ignition[875]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:50:33.401213 ignition[875]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:50:33.426975 ignition[875]: op(1): [started] loading QEMU firmware config module
Jan 24 00:50:33.427513 ignition[875]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:50:33.951157 ignition[875]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:50:37.090544 ignition[875]: parsing config with SHA512: 874a6b361aff8c78c03d34f1e261434a068fc5b622777eb48ff1676a642d85a269149342a4d5b9d89fe9991d04cef97efc7b83db6f3ccf268fd675eb591529ce
Jan 24 00:50:37.393016 unknown[875]: fetched base config from "system"
Jan 24 00:50:37.393041 unknown[875]: fetched user config from "qemu"
Jan 24 00:50:37.412520 ignition[875]: fetch-offline: fetch-offline passed
Jan 24 00:50:37.414157 ignition[875]: Ignition finished successfully
Jan 24 00:50:37.453527 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:50:37.604065 kernel: audit: type=1130 audit(1769215837.526:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:37.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:37.532740 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:50:37.550145 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:50:39.236767 ignition[886]: Ignition 2.24.0
Jan 24 00:50:39.237195 ignition[886]: Stage: kargs
Jan 24 00:50:39.240665 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:50:39.240680 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:50:39.255767 ignition[886]: kargs: kargs passed
Jan 24 00:50:39.256044 ignition[886]: Ignition finished successfully
Jan 24 00:50:39.430227 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:50:39.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:39.457023 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:50:39.565515 kernel: audit: type=1130 audit(1769215839.436:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:40.289776 ignition[895]: Ignition 2.24.0
Jan 24 00:50:40.291138 ignition[895]: Stage: disks
Jan 24 00:50:40.359236 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:50:40.359639 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:50:40.404971 ignition[895]: disks: disks passed
Jan 24 00:50:40.405837 ignition[895]: Ignition finished successfully
Jan 24 00:50:40.422632 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:50:40.457216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
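Before parsing, Ignition logs the SHA512 of the raw config bytes it fetched ("parsing config with SHA512: 874a6b36..."), which makes it possible to tie a boot to an exact config. An illustrative sketch of that digest step; the minimal config below is made up, so its digest will not match the one in the log:

```python
# Sketch: reproducing the kind of digest line Ignition logs before parsing.
# The config here is a hypothetical stand-in, not the one from this boot.
import hashlib
import json

config = {"ignition": {"version": "3.4.0"}}   # hypothetical minimal config
raw = json.dumps(config).encode()             # digest is over the raw bytes

digest = hashlib.sha512(raw).hexdigest()
print(f"parsing config with SHA512: {digest}")
```

Hashing the exact bytes of a deployed config file the same way lets you confirm which config a given boot actually consumed.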
Jan 24 00:50:40.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:40.474537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:50:40.545004 kernel: audit: type=1130 audit(1769215840.450:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:40.601074 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:50:40.613200 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:50:40.635544 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:50:40.725588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:50:41.224389 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 24 00:50:41.252828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:50:41.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:41.325753 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:50:41.402849 kernel: audit: type=1130 audit(1769215841.309:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:50:43.543136 kernel: EXT4-fs (vda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none.
Jan 24 00:50:43.551585 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:50:43.604187 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:50:43.707019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:50:43.842246 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:50:43.864140 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:50:43.990803 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Jan 24 00:50:43.864209 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:50:44.107723 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:50:44.107764 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:50:43.864498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:50:44.029234 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:50:44.126618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:50:44.318585 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:50:44.320666 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:50:44.333683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:50:47.343504 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:50:47.445717 kernel: audit: type=1130 audit(1769215847.374:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:47.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 24 00:50:47.410612 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:50:47.506625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:50:47.645224 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:50:47.696181 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:50:47.957727 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:50:48.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:48.093084 kernel: audit: type=1130 audit(1769215848.020:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:48.427148 ignition[1011]: INFO : Ignition 2.24.0 Jan 24 00:50:48.427148 ignition[1011]: INFO : Stage: mount Jan 24 00:50:48.427148 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:50:48.427148 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:50:48.605591 ignition[1011]: INFO : mount: mount passed Jan 24 00:50:48.605591 ignition[1011]: INFO : Ignition finished successfully Jan 24 00:50:48.636823 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:50:48.752240 kernel: audit: type=1130 audit(1769215848.670:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:48.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:50:48.679719 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:50:48.846073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:50:49.026751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Jan 24 00:50:49.058153 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:50:49.078142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:50:49.209051 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:50:49.209137 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:50:49.222828 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:50:49.918710 ignition[1041]: INFO : Ignition 2.24.0 Jan 24 00:50:49.918710 ignition[1041]: INFO : Stage: files Jan 24 00:50:49.957522 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:50:49.957522 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:50:50.157095 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:50:50.208381 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:50:50.208381 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:50:50.326428 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:50:50.377889 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:50:50.377889 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:50:50.348815 unknown[1041]: wrote ssh authorized keys file for user: core Jan 24 00:50:50.488032 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:50:50.488032 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:50:51.637074 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:50:57.542176 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:50:57.606852 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:50:57.676692 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:50:57.676692 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:50:57.813087 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:50:59.098696 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:51:00.307854 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1330164098 wd_nsec: 1330164027 Jan 24 00:51:12.441481 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:51:12.464538 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:51:12.464538 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:51:12.464538 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:51:12.464538 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:51:12.464538 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:51:12.709146 ignition[1041]: INFO : 
files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:51:12.709146 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:51:12.709146 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:51:12.709146 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:51:12.905364 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:51:13.068535 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:51:13.151889 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:51:13.198982 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:51:13.248198 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:51:13.268700 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:51:13.268700 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:51:13.268700 ignition[1041]: INFO : files: files passed Jan 24 00:51:13.268700 ignition[1041]: INFO : Ignition finished successfully Jan 24 00:51:13.403594 kernel: audit: type=1130 audit(1769215873.351:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:13.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.333741 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:51:13.406795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:51:13.457623 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:51:13.512205 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:51:13.553409 kernel: audit: type=1130 audit(1769215873.520:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.553447 kernel: audit: type=1131 audit(1769215873.520:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.512630 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 24 00:51:13.621910 initrd-setup-root-after-ignition[1074]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:51:13.647537 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:51:13.647537 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:51:13.680404 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:51:13.741172 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:51:13.812083 kernel: audit: type=1130 audit(1769215873.754:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:13.758622 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:51:13.970561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:51:14.219010 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:51:14.219680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:51:14.278738 kernel: audit: type=1130 audit(1769215874.235:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:14.278786 kernel: audit: type=1131 audit(1769215874.235:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.257911 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:51:14.280217 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:51:14.332904 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:51:14.340472 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:51:14.532830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:51:14.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.576439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:51:14.644135 kernel: audit: type=1130 audit(1769215874.563:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.695207 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:51:14.701229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 24 00:51:14.712564 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:51:14.761636 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:51:14.776211 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:51:14.776596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:51:14.843341 kernel: audit: type=1131 audit(1769215874.797:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.798527 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:51:14.826868 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:51:14.830850 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:51:15.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.830990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:51:15.100849 kernel: audit: type=1131 audit(1769215875.038:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:14.831215 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:51:14.831485 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:51:14.831615 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 24 00:51:14.834678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:51:14.834821 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:51:14.849725 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:51:14.855481 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:51:14.855879 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:51:14.856573 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:51:15.178424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:51:15.210602 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:51:15.227591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:51:15.236463 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:51:15.250674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:51:15.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.251191 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:51:15.351992 kernel: audit: type=1131 audit(1769215875.269:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.296635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:51:15.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:15.297721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:51:15.347934 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:51:15.365537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:51:15.368747 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:51:15.399881 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:51:15.400238 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:51:15.418799 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:51:15.422981 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:51:15.497829 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:51:15.497939 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:51:15.505676 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 00:51:15.505863 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:51:15.558896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:51:15.561132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:51:15.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.603511 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:51:15.605573 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:51:15.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:15.659379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:51:15.671573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:51:15.711963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:51:15.722787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:51:15.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.746465 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:51:15.753620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:51:15.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.777757 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:51:15.794239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:51:15.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.826727 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:51:15.844560 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:51:15.850557 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:51:15.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.905508 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 24 00:51:15.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:15.932649 ignition[1100]: INFO : Ignition 2.24.0 Jan 24 00:51:15.932649 ignition[1100]: INFO : Stage: umount Jan 24 00:51:15.932649 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:51:15.932649 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:51:15.909452 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:51:15.969838 ignition[1100]: INFO : umount: umount passed Jan 24 00:51:15.969838 ignition[1100]: INFO : Ignition finished successfully Jan 24 00:51:15.998691 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:51:15.998947 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:51:16.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.022651 systemd[1]: Stopped target network.target - Network. Jan 24 00:51:16.029361 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:51:16.029454 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:51:16.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.079108 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 24 00:51:16.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.079483 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:51:16.116383 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:51:16.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.116602 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:51:16.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.284157 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:51:16.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.284647 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:51:16.309561 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:51:16.309760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:51:16.325844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:51:16.338357 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:51:16.375786 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:51:16.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:16.376837 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:51:16.406773 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 00:51:16.422000 audit: BPF prog-id=9 op=UNLOAD Jan 24 00:51:16.438404 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:51:16.438563 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:51:16.445799 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:51:16.523786 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:51:16.524993 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:51:16.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.545935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:51:16.555153 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:51:16.617877 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:51:16.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.647756 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:51:16.656920 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:51:16.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:16.672000 audit: BPF prog-id=6 op=UNLOAD Jan 24 00:51:16.674542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:51:16.675579 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:51:16.702480 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:51:16.702625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:51:16.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.717741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:51:16.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.717949 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:51:16.726437 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:51:16.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.726535 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:51:16.745600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:51:16.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.745749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:51:16.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.764178 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:51:16.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.771744 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 00:51:16.771869 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:51:16.784467 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:51:16.784602 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:51:16.804164 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:51:16.804367 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:51:16.809378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:51:16.810191 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:51:16.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.915713 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:51:16.915950 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 24 00:51:16.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.929115 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:51:16.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.929232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:51:16.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:16.958933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:51:16.959503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:51:17.049797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:51:17.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:17.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:17.050124 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:51:17.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:17.059100 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:51:17.060975 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:51:17.069670 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:51:17.075868 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:51:17.147209 systemd[1]: Switching root. Jan 24 00:51:17.200458 systemd-journald[315]: Journal stopped Jan 24 00:51:22.304802 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Jan 24 00:51:22.305393 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:51:22.305486 kernel: SELinux: policy capability open_perms=1 Jan 24 00:51:22.305573 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:51:22.305596 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:51:22.305618 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:51:22.305640 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:51:22.305743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:51:22.305773 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:51:22.305793 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 00:51:22.305821 systemd[1]: Successfully loaded SELinux policy in 186.003ms. Jan 24 00:51:22.305926 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 34.377ms. Jan 24 00:51:22.305955 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:51:22.305980 systemd[1]: Detected virtualization kvm. Jan 24 00:51:22.306162 systemd[1]: Detected architecture x86-64. 
Jan 24 00:51:22.306191 systemd[1]: Detected first boot. Jan 24 00:51:22.306215 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:51:22.306239 zram_generator::config[1146]: No configuration found. Jan 24 00:51:22.306519 kernel: Guest personality initialized and is inactive Jan 24 00:51:22.306544 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 00:51:22.306574 kernel: Initialized host personality Jan 24 00:51:22.306731 kernel: NET: Registered PF_VSOCK protocol family Jan 24 00:51:22.306756 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:51:22.306780 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 24 00:51:22.306803 kernel: audit: type=1334 audit(1769215880.511:87): prog-id=12 op=LOAD Jan 24 00:51:22.306924 kernel: audit: type=1334 audit(1769215880.511:88): prog-id=3 op=UNLOAD Jan 24 00:51:22.306948 kernel: audit: type=1334 audit(1769215880.511:89): prog-id=13 op=LOAD Jan 24 00:51:22.307034 kernel: audit: type=1334 audit(1769215880.511:90): prog-id=14 op=LOAD Jan 24 00:51:22.307190 kernel: audit: type=1334 audit(1769215880.511:91): prog-id=4 op=UNLOAD Jan 24 00:51:22.307217 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:51:22.307236 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:51:22.307366 kernel: audit: type=1334 audit(1769215880.511:92): prog-id=5 op=UNLOAD Jan 24 00:51:22.307390 kernel: audit: type=1131 audit(1769215880.514:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.307413 kernel: audit: type=1130 audit(1769215880.617:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:22.307436 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:51:22.307580 kernel: audit: type=1131 audit(1769215880.617:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.307612 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:51:22.307632 kernel: audit: type=1334 audit(1769215880.679:96): prog-id=12 op=UNLOAD Jan 24 00:51:22.307654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:51:22.307679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:51:22.307703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:51:22.307798 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:51:22.307820 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:51:22.307839 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:51:22.307921 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:51:22.307950 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:51:22.307974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:51:22.308208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:51:22.308239 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:51:22.308391 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 24 00:51:22.308420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:51:22.308445 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:51:22.308468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:51:22.308558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:51:22.308658 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:51:22.308688 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:51:22.308710 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:51:22.308730 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:51:22.308753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:51:22.308775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:51:22.308796 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 00:51:22.308901 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:51:22.308927 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:51:22.308949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:51:22.308970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:51:22.309049 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 00:51:22.309147 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:51:22.309168 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 00:51:22.309376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:51:22.309402 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. 
Jan 24 00:51:22.309424 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 00:51:22.309446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:51:22.309526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:51:22.309547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:51:22.309569 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:51:22.309592 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:51:22.309681 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:51:22.309707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:22.309729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:51:22.309752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:51:22.309774 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:51:22.309795 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:51:22.309889 systemd[1]: Reached target machines.target - Containers. Jan 24 00:51:22.309918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:51:22.309940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:51:22.309961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:51:22.309980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:51:22.309999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 24 00:51:22.310020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:51:22.310136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:51:22.310158 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:51:22.310184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:51:22.310209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:51:22.310232 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:51:22.310368 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:51:22.310395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:51:22.310540 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:51:22.310563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:51:22.310585 kernel: fuse: init (API version 7.41) Jan 24 00:51:22.310686 kernel: ACPI: bus type drm_connector registered Jan 24 00:51:22.310710 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:51:22.310728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:51:22.310752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:51:22.310771 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:51:22.310791 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 00:51:22.310852 systemd-journald[1232]: Collecting audit messages is enabled. 
Jan 24 00:51:22.311141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:51:22.311528 systemd-journald[1232]: Journal started Jan 24 00:51:22.311634 systemd-journald[1232]: Runtime Journal (/run/log/journal/0914d799df074f7d9186f89f483d1458) is 6M, max 48.2M, 42.1M free. Jan 24 00:51:21.433000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 00:51:22.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.096000 audit: BPF prog-id=14 op=UNLOAD Jan 24 00:51:22.096000 audit: BPF prog-id=13 op=UNLOAD Jan 24 00:51:22.119000 audit: BPF prog-id=15 op=LOAD Jan 24 00:51:22.121000 audit: BPF prog-id=16 op=LOAD Jan 24 00:51:22.123000 audit: BPF prog-id=17 op=LOAD Jan 24 00:51:22.295000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 00:51:22.295000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe783b9c20 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:51:22.295000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 00:51:20.475865 systemd[1]: Queued start job for default target multi-user.target. 
Jan 24 00:51:20.514491 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:51:20.516692 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:51:20.517705 systemd[1]: systemd-journald.service: Consumed 5.473s CPU time. Jan 24 00:51:22.339239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:22.349663 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:51:22.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.361926 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:51:22.371861 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:51:22.396518 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:51:22.403918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:51:22.412703 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:51:22.423779 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:51:22.435588 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:51:22.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.450905 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:51:22.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:22.467751 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:51:22.468208 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:51:22.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.479948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:51:22.518223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:51:22.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.551424 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:51:22.553235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:51:22.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:22.569851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:51:22.570590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:51:22.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.620858 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:51:22.621597 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:51:22.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.646522 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:51:22.648513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:51:22.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:22.676594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:51:22.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.710782 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:51:22.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.729982 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:51:22.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.743165 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 00:51:22.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:22.796727 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:51:22.810040 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 00:51:22.833516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:51:22.848528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 24 00:51:22.876526 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:51:22.876681 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:51:22.926728 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 00:51:22.957687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:51:22.958004 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:51:22.969654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:51:23.132965 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:51:23.155372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:51:23.163452 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:51:23.171878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:51:23.178548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:51:23.210695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:51:23.213722 systemd-journald[1232]: Time spent on flushing to /var/log/journal/0914d799df074f7d9186f89f483d1458 is 187.317ms for 1132 entries. Jan 24 00:51:23.213722 systemd-journald[1232]: System Journal (/var/log/journal/0914d799df074f7d9186f89f483d1458) is 8M, max 163.5M, 155.5M free. Jan 24 00:51:23.441907 systemd-journald[1232]: Received client request to flush runtime journal. 
Jan 24 00:51:23.441978 kernel: loop1: detected capacity change from 0 to 111560 Jan 24 00:51:23.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.230550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:51:23.245595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:51:23.259185 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:51:23.269755 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:51:23.276912 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:51:23.312960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:51:23.410235 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:51:23.424986 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:51:23.445669 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:51:23.453015 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. 
Jan 24 00:51:23.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.453033 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 24 00:51:23.468383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:51:23.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.485755 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:51:23.543426 kernel: loop2: detected capacity change from 0 to 50784 Jan 24 00:51:23.611535 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:51:23.617494 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:51:23.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:23.676965 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:51:23.747049 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:51:23.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:23.762000 audit: BPF prog-id=18 op=LOAD Jan 24 00:51:23.762000 audit: BPF prog-id=19 op=LOAD Jan 24 00:51:23.762000 audit: BPF prog-id=20 op=LOAD Jan 24 00:51:23.764587 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 00:51:23.779000 audit: BPF prog-id=21 op=LOAD Jan 24 00:51:23.781759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:51:23.794565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:51:23.815000 audit: BPF prog-id=22 op=LOAD Jan 24 00:51:23.816000 audit: BPF prog-id=23 op=LOAD Jan 24 00:51:23.816000 audit: BPF prog-id=24 op=LOAD Jan 24 00:51:23.829947 kernel: loop4: detected capacity change from 0 to 111560 Jan 24 00:51:23.832794 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:51:23.875000 audit: BPF prog-id=25 op=LOAD Jan 24 00:51:23.933000 audit: BPF prog-id=26 op=LOAD Jan 24 00:51:23.934000 audit: BPF prog-id=27 op=LOAD Jan 24 00:51:23.937984 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:51:24.039856 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 00:51:24.101447 kernel: loop6: detected capacity change from 0 to 224512 Jan 24 00:51:24.107205 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 24 00:51:24.107225 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 24 00:51:24.121889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:51:24.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:24.208505 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. 
Jan 24 00:51:24.210497 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:51:24.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:24.215913 systemd-nsresourced[1294]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:51:24.229810 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 00:51:24.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:24.628460 (sd-merge)[1292]: Merged extensions into '/usr'. Jan 24 00:51:24.642183 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:51:24.642201 systemd[1]: Reloading... Jan 24 00:51:25.177401 zram_generator::config[1340]: No configuration found. Jan 24 00:51:25.251060 systemd-oomd[1289]: No swap; memory pressure usage will be degraded Jan 24 00:51:25.267674 systemd-resolved[1290]: Positive Trust Anchors: Jan 24 00:51:25.267700 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:51:25.267708 systemd-resolved[1290]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:51:25.267755 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:51:25.317551 systemd-resolved[1290]: Defaulting to hostname 'linux'. Jan 24 00:51:25.933863 systemd[1]: Reloading finished in 1290 ms. Jan 24 00:51:25.999454 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:51:26.014541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:51:26.062409 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 24 00:51:26.062735 kernel: audit: type=1130 audit(1769215886.011:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:26.062776 kernel: audit: type=1130 audit(1769215886.055:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:26.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:26.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:26.063391 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:51:26.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:26.214559 kernel: audit: type=1130 audit(1769215886.164:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:26.306561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:51:26.358667 systemd[1]: Starting ensure-sysext.service... Jan 24 00:51:26.369630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 24 00:51:26.409592 kernel: audit: type=1334 audit(1769215886.391:148): prog-id=28 op=LOAD Jan 24 00:51:26.391000 audit: BPF prog-id=28 op=LOAD Jan 24 00:51:26.391000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:51:26.391000 audit: BPF prog-id=29 op=LOAD Jan 24 00:51:26.391000 audit: BPF prog-id=30 op=LOAD Jan 24 00:51:26.391000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:51:26.425843 kernel: audit: type=1334 audit(1769215886.391:149): prog-id=22 op=UNLOAD Jan 24 00:51:26.425917 kernel: audit: type=1334 audit(1769215886.391:150): prog-id=29 op=LOAD Jan 24 00:51:26.425956 kernel: audit: type=1334 audit(1769215886.391:151): prog-id=30 op=LOAD Jan 24 00:51:26.426001 kernel: audit: type=1334 audit(1769215886.391:152): prog-id=23 op=UNLOAD Jan 24 00:51:26.391000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:51:26.392000 audit: BPF prog-id=31 op=LOAD Jan 24 00:51:26.392000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:51:26.392000 audit: BPF prog-id=32 op=LOAD Jan 24 00:51:26.392000 audit: BPF prog-id=33 op=LOAD Jan 24 00:51:26.392000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:51:26.392000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:51:26.396000 audit: BPF prog-id=34 op=LOAD Jan 24 00:51:26.396000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:51:26.397000 audit: BPF prog-id=35 op=LOAD Jan 24 00:51:26.397000 audit: BPF prog-id=36 op=LOAD Jan 24 00:51:26.397000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:51:26.443440 kernel: audit: type=1334 audit(1769215886.391:153): prog-id=24 op=UNLOAD Jan 24 00:51:26.443503 kernel: audit: type=1334 audit(1769215886.392:154): prog-id=31 op=LOAD Jan 24 00:51:26.397000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:51:26.398000 audit: BPF prog-id=37 op=LOAD Jan 24 00:51:26.398000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:51:26.400000 audit: BPF prog-id=38 op=LOAD Jan 24 00:51:26.400000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:51:26.403000 audit: BPF prog-id=39 op=LOAD Jan 24 00:51:26.403000 audit: BPF prog-id=40 op=LOAD Jan 24 00:51:26.403000 audit: BPF 
prog-id=19 op=UNLOAD Jan 24 00:51:26.403000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:51:26.507761 systemd[1]: Reload requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:51:26.507850 systemd[1]: Reloading... Jan 24 00:51:26.648797 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:51:26.648862 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:51:26.649570 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:51:26.653375 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 24 00:51:26.653642 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 24 00:51:26.973373 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:51:26.973835 systemd-tmpfiles[1377]: Skipping /boot Jan 24 00:51:27.030925 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:51:27.030994 systemd-tmpfiles[1377]: Skipping /boot Jan 24 00:51:27.050375 zram_generator::config[1408]: No configuration found. Jan 24 00:51:28.114483 systemd[1]: Reloading finished in 1604 ms. Jan 24 00:51:28.157871 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:51:28.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:28.177000 audit: BPF prog-id=41 op=LOAD Jan 24 00:51:28.179000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:51:28.185000 audit: BPF prog-id=42 op=LOAD Jan 24 00:51:28.185000 audit: BPF prog-id=43 op=LOAD Jan 24 00:51:28.185000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:51:28.185000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:51:28.186000 audit: BPF prog-id=44 op=LOAD Jan 24 00:51:28.186000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:51:28.186000 audit: BPF prog-id=45 op=LOAD Jan 24 00:51:28.186000 audit: BPF prog-id=46 op=LOAD Jan 24 00:51:28.186000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:51:28.186000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:51:28.194000 audit: BPF prog-id=47 op=LOAD Jan 24 00:51:28.194000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:51:28.195000 audit: BPF prog-id=48 op=LOAD Jan 24 00:51:28.200000 audit: BPF prog-id=49 op=LOAD Jan 24 00:51:28.200000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:51:28.200000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:51:28.202000 audit: BPF prog-id=50 op=LOAD Jan 24 00:51:28.202000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:51:28.202000 audit: BPF prog-id=51 op=LOAD Jan 24 00:51:28.202000 audit: BPF prog-id=52 op=LOAD Jan 24 00:51:28.202000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:51:28.202000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:51:28.203000 audit: BPF prog-id=53 op=LOAD Jan 24 00:51:28.204000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:51:28.242671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:51:28.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.306223 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:51:28.317975 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 24 00:51:28.346907 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:51:28.379545 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:51:28.393000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:51:28.394000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:51:28.395000 audit: BPF prog-id=54 op=LOAD Jan 24 00:51:28.395000 audit: BPF prog-id=55 op=LOAD Jan 24 00:51:28.408661 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:51:28.423586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:51:28.439410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:28.439651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:51:28.454232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:51:28.464619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:51:28.480030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:51:28.501172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:51:28.501652 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:51:28.501872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 24 00:51:28.502005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:28.501000 audit[1459]: SYSTEM_BOOT pid=1459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.529035 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:51:28.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.543356 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:51:28.546452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:51:28.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.561632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:51:28.563576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:51:28.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:51:28.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.574964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:51:28.576936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:51:28.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:51:28.608726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:28.609045 systemd-udevd[1458]: Using default interface naming scheme 'v257'. Jan 24 00:51:28.609496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:51:28.614516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:51:28.628506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:51:28.646771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:51:28.658845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:51:28.673692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 24 00:51:28.674010 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:51:28.674200 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:51:28.674822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:51:28.678000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:51:28.678000 audit[1484]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe8e5832d0 a2=420 a3=0 items=0 ppid=1448 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:51:28.678000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:51:28.680973 augenrules[1484]: No rules Jan 24 00:51:28.694821 systemd[1]: Finished ensure-sysext.service. Jan 24 00:51:28.701939 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:51:28.704541 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:51:28.715190 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:51:28.727652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:51:28.728178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:51:28.739699 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:51:28.740081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 24 00:51:28.748623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:51:28.749417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:51:28.761453 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:51:28.762174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:51:28.782593 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:51:28.994940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:51:29.033620 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:51:29.047647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:51:29.048065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:51:29.060827 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:51:29.073507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:51:29.748072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:51:29.756435 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:51:29.760752 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:51:30.141870 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:51:30.164988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 24 00:51:30.734772 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:51:30.734880 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:51:30.730976 systemd-networkd[1509]: lo: Link UP Jan 24 00:51:30.730988 systemd-networkd[1509]: lo: Gained carrier Jan 24 00:51:30.743555 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:51:30.762028 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:51:30.762044 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:51:30.776008 systemd-networkd[1509]: eth0: Link UP Jan 24 00:51:30.778925 systemd-networkd[1509]: eth0: Gained carrier Jan 24 00:51:30.779785 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:51:30.811041 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:51:30.823639 systemd[1]: Reached target network.target - Network. Jan 24 00:51:30.842698 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:51:30.857493 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:51:30.869170 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection. Jan 24 00:51:30.872977 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 00:51:31.666529 systemd-resolved[1290]: Clock change detected. Flushing caches. Jan 24 00:51:31.667059 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:51:31.667227 systemd-timesyncd[1512]: Initial clock synchronization to Sat 2026-01-24 00:51:31.662910 UTC. 
Jan 24 00:51:31.669546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:51:31.724170 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:51:31.865459 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:51:31.877966 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:51:32.961899 systemd-networkd[1509]: eth0: Gained IPv6LL Jan 24 00:51:32.991369 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:51:33.003201 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:51:33.089520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:51:35.422719 kernel: kvm_amd: TSC scaling supported Jan 24 00:51:35.424783 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:51:35.424916 kernel: kvm_amd: Nested Paging enabled Jan 24 00:51:35.424942 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:51:35.424961 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:51:35.577574 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:51:35.598081 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:51:35.674040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:51:35.722622 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:51:36.402778 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:51:36.414552 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:51:36.416484 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:51:36.426993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 24 00:51:36.436738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:51:36.466751 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:51:36.474905 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:51:36.482350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:51:36.491758 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:51:36.499187 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:51:36.506654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:51:36.515842 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:51:36.516030 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:51:36.526766 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:51:36.535912 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:51:36.578037 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:51:36.608895 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:51:36.633868 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 00:51:36.671513 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:51:36.712524 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:51:36.759055 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 00:51:36.780947 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 24 00:51:36.795024 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:51:36.810062 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:51:36.817983 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:51:36.818201 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:51:36.822830 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:51:36.870612 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:51:36.904555 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:51:36.926611 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:51:36.936703 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:51:36.959909 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:51:36.972975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:51:36.985003 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 00:51:37.003368 jq[1569]: false Jan 24 00:51:37.016094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:51:37.058513 extend-filesystems[1570]: Found /dev/vda6 Jan 24 00:51:37.068817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:51:37.077062 extend-filesystems[1570]: Found /dev/vda9 Jan 24 00:51:37.090536 extend-filesystems[1570]: Checking size of /dev/vda9 Jan 24 00:51:37.088607 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 24 00:51:37.107688 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 24 00:51:37.107688 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 24 00:51:37.116693 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:51:37.153024 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 24 00:51:37.154506 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:51:37.154506 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 24 00:51:37.153543 oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 24 00:51:37.153579 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:51:37.153657 oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 24 00:51:37.156193 extend-filesystems[1570]: Resized partition /dev/vda9 Jan 24 00:51:37.161685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:51:37.181504 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 24 00:51:37.181504 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:51:37.181446 oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 24 00:51:37.181469 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:51:37.184402 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:51:37.189569 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 24 00:51:37.204183 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 24 00:51:37.242723 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:51:37.265892 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:51:37.266928 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:51:37.279233 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:51:37.310669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:51:37.426986 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:51:37.550727 jq[1605]: true Jan 24 00:51:37.440858 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:51:37.457487 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:51:37.462216 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:51:37.464801 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:51:37.510950 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:51:37.511514 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:51:37.532766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:51:37.575381 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 24 00:51:37.588690 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:51:37.590890 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 24 00:51:37.636392 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:51:37.636392 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:51:37.636392 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 24 00:51:37.705476 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Jan 24 00:51:37.636838 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:51:37.639608 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:51:37.766710 jq[1612]: true Jan 24 00:51:37.767354 update_engine[1599]: I20260124 00:51:37.765513 1599 main.cc:92] Flatcar Update Engine starting Jan 24 00:51:37.782031 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:51:37.782808 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:51:37.808956 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:51:37.878205 tar[1610]: linux-amd64/LICENSE Jan 24 00:51:37.878205 tar[1610]: linux-amd64/helm Jan 24 00:51:37.925031 dbus-daemon[1567]: [system] SELinux support is enabled Jan 24 00:51:37.926616 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:51:37.950116 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:51:37.950644 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:51:37.951534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:51:37.951567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:51:37.953507 systemd-logind[1597]: New seat seat0. 
Jan 24 00:51:37.968371 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:51:37.968415 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:51:37.975438 bash[1652]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:51:37.977580 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:51:37.988539 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:51:37.997518 update_engine[1599]: I20260124 00:51:37.996460 1599 update_check_scheduler.cc:74] Next update check in 11m2s
Jan 24 00:51:38.002441 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:51:38.008510 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:51:38.036989 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 24 00:51:38.037643 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:51:38.233198 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:51:38.928591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:51:38.968670 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:51:38.978818 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:51:39.033918 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:51:39.034699 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:51:39.071218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:51:39.536352 containerd[1614]: time="2026-01-24T00:51:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 24 00:51:39.563446 containerd[1614]: time="2026-01-24T00:51:39.563382020Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 24 00:51:39.583231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:51:39.607979 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:51:39.624962 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:51:39.637044 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:51:39.696571 containerd[1614]: time="2026-01-24T00:51:39.695122476Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.417µs"
Jan 24 00:51:39.696571 containerd[1614]: time="2026-01-24T00:51:39.695457752Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 24 00:51:39.696571 containerd[1614]: time="2026-01-24T00:51:39.695888125Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 24 00:51:39.696571 containerd[1614]: time="2026-01-24T00:51:39.695917059Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 24 00:51:39.696947 containerd[1614]: time="2026-01-24T00:51:39.696925422Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 24 00:51:39.697069 containerd[1614]: time="2026-01-24T00:51:39.696958243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 24 00:51:39.893080 containerd[1614]: time="2026-01-24T00:51:39.891454428Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 24 00:51:39.893080 containerd[1614]: time="2026-01-24T00:51:39.891685319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.893631733Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.893656309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.893752849Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.893764671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.895634492Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896539 containerd[1614]: time="2026-01-24T00:51:39.895658707Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 24 00:51:39.896862 containerd[1614]: time="2026-01-24T00:51:39.896826137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.897826 containerd[1614]: time="2026-01-24T00:51:39.897797570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.897963 containerd[1614]: time="2026-01-24T00:51:39.897936750Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 24 00:51:39.899584 containerd[1614]: time="2026-01-24T00:51:39.899546605Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 24 00:51:39.899859 containerd[1614]: time="2026-01-24T00:51:39.899830466Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 24 00:51:39.900978 containerd[1614]: time="2026-01-24T00:51:39.900946309Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 24 00:51:39.903054 containerd[1614]: time="2026-01-24T00:51:39.901230890Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:51:39.962410 containerd[1614]: time="2026-01-24T00:51:39.962040657Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965515634Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965821565Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965847203Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965869174Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965888180Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965903889Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965916422Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.965933053Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.966015588Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.966041676Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.966056875Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.966069158Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 24 00:51:39.966365 containerd[1614]: time="2026-01-24T00:51:39.966084927Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 24 00:51:39.972406 containerd[1614]: time="2026-01-24T00:51:39.971088080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974487837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974551796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974572054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974618931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974641514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974662132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974735089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974811180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974833131Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.974849101Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.975042973Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.975118554Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 24 00:51:39.975347 containerd[1614]: time="2026-01-24T00:51:39.975198193Z" level=info msg="Start snapshots syncer"
Jan 24 00:51:39.979804 containerd[1614]: time="2026-01-24T00:51:39.979767534Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 24 00:51:39.985469 containerd[1614]: time="2026-01-24T00:51:39.983850616Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 24 00:51:39.989339 containerd[1614]: time="2026-01-24T00:51:39.988345248Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 24 00:51:39.989339 containerd[1614]: time="2026-01-24T00:51:39.988431639Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 24 00:51:39.989339 containerd[1614]: time="2026-01-24T00:51:39.988701122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 24 00:51:39.989339 containerd[1614]: time="2026-01-24T00:51:39.988869607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 24 00:51:39.989339 containerd[1614]: time="2026-01-24T00:51:39.988895264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 24 00:51:39.989533 containerd[1614]: time="2026-01-24T00:51:39.989447916Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 24 00:51:39.989730 containerd[1614]: time="2026-01-24T00:51:39.989593588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 24 00:51:39.989730 containerd[1614]: time="2026-01-24T00:51:39.989615409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 24 00:51:39.989956 containerd[1614]: time="2026-01-24T00:51:39.989629766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.989996961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990023521Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990230007Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990394734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990509758Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990532070Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990546738Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990560393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990628220Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990765186Z" level=info msg="runtime interface created"
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990779152Z" level=info msg="created NRI interface"
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990793398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990813396Z" level=info msg="Connect containerd service"
Jan 24 00:51:39.992366 containerd[1614]: time="2026-01-24T00:51:39.990974677Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:51:39.995919 containerd[1614]: time="2026-01-24T00:51:39.995619188Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:51:42.032805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 00:51:42.081763 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:47548.service - OpenSSH per-connection server daemon (10.0.0.1:47548).
Jan 24 00:51:42.500240 containerd[1614]: time="2026-01-24T00:51:42.499440804Z" level=info msg="Start subscribing containerd event"
Jan 24 00:51:42.501462 containerd[1614]: time="2026-01-24T00:51:42.500496305Z" level=info msg="Start recovering state"
Jan 24 00:51:42.505377 containerd[1614]: time="2026-01-24T00:51:42.499841828Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:51:42.505377 containerd[1614]: time="2026-01-24T00:51:42.504839684Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:51:42.511626 containerd[1614]: time="2026-01-24T00:51:42.511465185Z" level=info msg="Start event monitor"
Jan 24 00:51:42.512226 containerd[1614]: time="2026-01-24T00:51:42.512065837Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:51:42.512596 containerd[1614]: time="2026-01-24T00:51:42.512573895Z" level=info msg="Start streaming server"
Jan 24 00:51:42.514546 containerd[1614]: time="2026-01-24T00:51:42.514518676Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 24 00:51:42.519085 containerd[1614]: time="2026-01-24T00:51:42.519053312Z" level=info msg="runtime interface starting up..."
Jan 24 00:51:42.520915 containerd[1614]: time="2026-01-24T00:51:42.520881185Z" level=info msg="starting plugins..."
Jan 24 00:51:42.521031 containerd[1614]: time="2026-01-24T00:51:42.521008642Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 24 00:51:42.521767 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:51:42.533785 containerd[1614]: time="2026-01-24T00:51:42.533752074Z" level=info msg="containerd successfully booted in 3.000126s"
Jan 24 00:51:42.595733 tar[1610]: linux-amd64/README.md
Jan 24 00:51:42.674498 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 00:51:43.426712 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 47548 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:43.496994 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:43.620859 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:51:43.635365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:51:43.678697 systemd-logind[1597]: New session 1 of user core.
Jan 24 00:51:43.799954 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:51:43.823664 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:51:43.913975 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:43.934838 systemd-logind[1597]: New session 2 of user core.
Jan 24 00:51:45.186425 systemd[1708]: Queued start job for default target default.target.
Jan 24 00:51:45.207874 systemd[1708]: Created slice app.slice - User Application Slice.
Jan 24 00:51:45.207986 systemd[1708]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 24 00:51:45.208009 systemd[1708]: Reached target paths.target - Paths.
Jan 24 00:51:45.208108 systemd[1708]: Reached target timers.target - Timers.
Jan 24 00:51:45.213100 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:51:45.218842 systemd[1708]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 24 00:51:45.464929 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:51:45.465151 systemd[1708]: Reached target sockets.target - Sockets.
Jan 24 00:51:45.478864 systemd[1708]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 24 00:51:45.479016 systemd[1708]: Reached target basic.target - Basic System.
Jan 24 00:51:45.480486 systemd[1708]: Reached target default.target - Main User Target.
Jan 24 00:51:45.480597 systemd[1708]: Startup finished in 1.496s.
Jan 24 00:51:45.480931 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:51:45.490143 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:51:45.554842 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:60702.service - OpenSSH per-connection server daemon (10.0.0.1:60702).
Jan 24 00:51:46.114126 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 60702 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:46.122081 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:46.160986 systemd-logind[1597]: New session 3 of user core.
Jan 24 00:51:46.192070 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 00:51:46.516662 sshd[1731]: Connection closed by 10.0.0.1 port 60702
Jan 24 00:51:46.519775 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:46.554845 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704).
Jan 24 00:51:46.558762 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:60702.service: Deactivated successfully.
Jan 24 00:51:46.562892 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 00:51:46.567978 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit.
Jan 24 00:51:46.574807 systemd-logind[1597]: Removed session 3.
Jan 24 00:51:47.282619 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:47.289128 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:47.319480 systemd-logind[1597]: New session 4 of user core.
Jan 24 00:51:47.343981 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 00:51:47.475498 sshd[1741]: Connection closed by 10.0.0.1 port 60704
Jan 24 00:51:47.475909 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:47.491043 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:60704.service: Deactivated successfully.
Jan 24 00:51:47.495520 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 00:51:47.505451 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit.
Jan 24 00:51:47.514001 systemd-logind[1597]: Removed session 4.
Jan 24 00:51:48.600855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:51:48.608125 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 00:51:48.608887 systemd[1]: Startup finished in 22.076s (kernel) + 1min 21.600s (initrd) + 30.327s (userspace) = 2min 14.004s.
Jan 24 00:51:48.684899 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:51:54.476664 kubelet[1750]: E0124 00:51:54.475438 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:51:54.487695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:51:54.491048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:51:54.499095 systemd[1]: kubelet.service: Consumed 9.684s CPU time, 267.5M memory peak.
Jan 24 00:51:57.582666 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:34452.service - OpenSSH per-connection server daemon (10.0.0.1:34452).
Jan 24 00:51:57.935840 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 34452 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:57.969731 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:58.025398 systemd-logind[1597]: New session 5 of user core.
Jan 24 00:51:58.065795 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 00:51:58.286975 sshd[1765]: Connection closed by 10.0.0.1 port 34452
Jan 24 00:51:58.288121 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:58.312765 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:34452.service: Deactivated successfully.
Jan 24 00:51:58.316905 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 00:51:58.322675 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit.
Jan 24 00:51:58.333570 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:34454.service - OpenSSH per-connection server daemon (10.0.0.1:34454).
Jan 24 00:51:58.340743 systemd-logind[1597]: Removed session 5.
Jan 24 00:51:58.808817 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:58.811892 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:58.835577 systemd-logind[1597]: New session 6 of user core.
Jan 24 00:51:58.868941 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 00:51:59.077106 sshd[1776]: Connection closed by 10.0.0.1 port 34454
Jan 24 00:51:59.092485 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:59.109651 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:34466.service - OpenSSH per-connection server daemon (10.0.0.1:34466).
Jan 24 00:51:59.119635 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:34454.service: Deactivated successfully.
Jan 24 00:51:59.126621 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 00:51:59.153441 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit.
Jan 24 00:51:59.163809 systemd-logind[1597]: Removed session 6.
Jan 24 00:51:59.357093 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 34466 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:51:59.367222 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:59.628549 systemd-logind[1597]: New session 7 of user core.
Jan 24 00:51:59.703493 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 00:51:59.868036 sshd[1786]: Connection closed by 10.0.0.1 port 34466
Jan 24 00:51:59.871762 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:59.894050 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478).
Jan 24 00:51:59.907101 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:34466.service: Deactivated successfully.
Jan 24 00:51:59.912774 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:51:59.932100 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:51:59.940095 systemd-logind[1597]: Removed session 7.
Jan 24 00:52:00.119080 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:52:00.122633 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:00.140228 systemd-logind[1597]: New session 8 of user core.
Jan 24 00:52:00.157839 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 00:52:00.245606 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 00:52:00.246618 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:52:00.291791 sudo[1797]: pam_unix(sudo:session): session closed for user root
Jan 24 00:52:00.296992 sshd[1796]: Connection closed by 10.0.0.1 port 34478
Jan 24 00:52:00.297869 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:00.320006 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:34478.service: Deactivated successfully.
Jan 24 00:52:00.324047 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 00:52:00.329694 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit.
Jan 24 00:52:00.333720 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:34494.service - OpenSSH per-connection server daemon (10.0.0.1:34494).
Jan 24 00:52:00.349449 systemd-logind[1597]: Removed session 8.
Jan 24 00:52:00.514617 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 34494 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:52:00.528120 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:00.583769 systemd-logind[1597]: New session 9 of user core.
Jan 24 00:52:00.604052 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 00:52:00.722573 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 00:52:00.723218 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:52:00.747810 sudo[1810]: pam_unix(sudo:session): session closed for user root
Jan 24 00:52:00.773803 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 24 00:52:00.777952 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:52:00.806818 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 24 00:52:01.060000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 24 00:52:01.063597 augenrules[1834]: No rules
Jan 24 00:52:01.068509 kernel: kauditd_printk_skb: 62 callbacks suppressed
Jan 24 00:52:01.068593 kernel: audit: type=1305 audit(1769215921.060:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 24 00:52:01.069791 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 00:52:01.070425 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 24 00:52:01.072726 sudo[1809]: pam_unix(sudo:session): session closed for user root Jan 24 00:52:01.083091 sshd[1808]: Connection closed by 10.0.0.1 port 34494 Jan 24 00:52:01.083994 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:01.060000 audit[1834]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc65be640 a2=420 a3=0 items=0 ppid=1815 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:01.115904 kernel: audit: type=1300 audit(1769215921.060:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc65be640 a2=420 a3=0 items=0 ppid=1815 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:01.118662 kernel: audit: type=1327 audit(1769215921.060:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:52:01.060000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:52:01.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.127708 kernel: audit: type=1130 audit(1769215921.070:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:01.070000 audit[1809]: USER_END pid=1809 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.202482 kernel: audit: type=1131 audit(1769215921.070:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.202776 kernel: audit: type=1106 audit(1769215921.070:218): pid=1809 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.202904 kernel: audit: type=1104 audit(1769215921.070:219): pid=1809 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.070000 audit[1809]: CRED_DISP pid=1809 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:01.236501 kernel: audit: type=1106 audit(1769215921.091:220): pid=1804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.091000 audit[1804]: USER_END pid=1804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.091000 audit[1804]: CRED_DISP pid=1804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.302004 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:34494.service: Deactivated successfully. Jan 24 00:52:01.306551 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:52:01.319864 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:52:01.327405 kernel: audit: type=1104 audit(1769215921.091:221): pid=1804 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.327484 kernel: audit: type=1131 audit(1769215921.301:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.105:22-10.0.0.1:34494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:01.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.105:22-10.0.0.1:34494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.368188 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:34506.service - OpenSSH per-connection server daemon (10.0.0.1:34506). Jan 24 00:52:01.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:01.378125 systemd-logind[1597]: Removed session 9. Jan 24 00:52:01.807000 audit[1843]: USER_ACCT pid=1843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.809894 sshd[1843]: Accepted publickey for core from 10.0.0.1 port 34506 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:52:01.810000 audit[1843]: CRED_ACQ pid=1843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.810000 audit[1843]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc94c5200 a2=3 a3=0 items=0 ppid=1 pid=1843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:01.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:01.813130 sshd-session[1843]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 24 00:52:01.868088 systemd-logind[1597]: New session 10 of user core. Jan 24 00:52:01.889846 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:52:01.900000 audit[1843]: USER_START pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.913000 audit[1847]: CRED_ACQ pid=1847 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:52:01.998000 audit[1848]: USER_ACCT pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:02.001057 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:52:02.002000 audit[1848]: CRED_REFR pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:02.002000 audit[1848]: USER_START pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:52:02.004192 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:52:04.802992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:52:04.824413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:52:05.662923 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:52:05.725096 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:52:09.715722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:52:09.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:09.726416 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 24 00:52:09.726617 kernel: audit: type=1130 audit(1769215929.714:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:09.779938 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:52:10.099758 kubelet[1886]: E0124 00:52:10.098663 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:52:10.115936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:52:10.116779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:52:10.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:52:10.118211 systemd[1]: kubelet.service: Consumed 2.644s CPU time, 110.9M memory peak. Jan 24 00:52:10.162776 kernel: audit: type=1131 audit(1769215930.116:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:10.299766 dockerd[1874]: time="2026-01-24T00:52:10.295658070Z" level=info msg="Starting up" Jan 24 00:52:10.312494 dockerd[1874]: time="2026-01-24T00:52:10.310944025Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:52:10.445083 dockerd[1874]: time="2026-01-24T00:52:10.443688085Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:52:10.696413 systemd[1]: var-lib-docker-metacopy\x2dcheck1542797971-merged.mount: Deactivated successfully. Jan 24 00:52:10.799862 dockerd[1874]: time="2026-01-24T00:52:10.799226569Z" level=info msg="Loading containers: start." 
Jan 24 00:52:10.861665 kernel: Initializing XFRM netlink socket Jan 24 00:52:11.359000 audit[1940]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.392963 kernel: audit: type=1325 audit(1769215931.359:234): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.393101 kernel: audit: type=1300 audit(1769215931.359:234): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe18c7e220 a2=0 a3=0 items=0 ppid=1874 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.359000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe18c7e220 a2=0 a3=0 items=0 ppid=1874 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:52:11.438022 kernel: audit: type=1327 audit(1769215931.359:234): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:52:11.438137 kernel: audit: type=1325 audit(1769215931.387:235): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.387000 audit[1942]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.455145 kernel: audit: type=1300 audit(1769215931.387:235): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc4e05f6d0 a2=0 a3=0 items=0 ppid=1874 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.387000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc4e05f6d0 a2=0 a3=0 items=0 ppid=1874 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.484577 kernel: audit: type=1327 audit(1769215931.387:235): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:52:11.387000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:52:11.414000 audit[1944]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.515114 kernel: audit: type=1325 audit(1769215931.414:236): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.515223 kernel: audit: type=1300 audit(1769215931.414:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef89816f0 a2=0 a3=0 items=0 ppid=1874 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.414000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef89816f0 a2=0 a3=0 items=0 ppid=1874 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.414000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:52:11.431000 audit[1946]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.431000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2485c2f0 a2=0 a3=0 items=0 ppid=1874 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.431000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:52:11.445000 audit[1948]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.445000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffcfe266f0 a2=0 a3=0 items=0 ppid=1874 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:52:11.476000 audit[1950]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.476000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd6b50450 a2=0 a3=0 items=0 ppid=1874 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.476000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:52:11.489000 audit[1952]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.489000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd914f9600 a2=0 a3=0 items=0 ppid=1874 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:52:11.503000 audit[1954]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.503000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe6a8a7a70 a2=0 a3=0 items=0 ppid=1874 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.503000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:52:11.709000 audit[1957]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.709000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff3136c030 a2=0 a3=0 items=0 ppid=1874 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:52:11.730000 audit[1959]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.730000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffecd2882a0 a2=0 a3=0 items=0 ppid=1874 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:52:11.744000 audit[1961]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.744000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc8294e7d0 a2=0 a3=0 items=0 ppid=1874 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:52:11.757000 audit[1963]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.757000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe4447fc70 a2=0 a3=0 items=0 ppid=1874 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:52:11.779000 audit[1965]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:11.779000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff456baca0 a2=0 a3=0 items=0 ppid=1874 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:11.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:52:12.285000 audit[1995]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.285000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd96297030 a2=0 a3=0 items=0 ppid=1874 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:52:12.299000 audit[1997]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.299000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe84d66a60 a2=0 a3=0 items=0 ppid=1874 pid=1997 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:52:12.319000 audit[1999]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.319000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd332c4cb0 a2=0 a3=0 items=0 ppid=1874 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:52:12.335000 audit[2001]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.335000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc243978e0 a2=0 a3=0 items=0 ppid=1874 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.335000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:52:12.345000 audit[2003]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.345000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdeecadba0 a2=0 a3=0 items=0 ppid=1874 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:52:12.366000 audit[2005]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.366000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcce513f00 a2=0 a3=0 items=0 ppid=1874 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:52:12.384000 audit[2007]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.384000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff85812770 a2=0 a3=0 items=0 ppid=1874 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:52:12.398000 audit[2009]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.398000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff616a6310 a2=0 a3=0 items=0 ppid=1874 pid=2009 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:52:12.412000 audit[2011]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.412000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffeb4b126b0 a2=0 a3=0 items=0 ppid=1874 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:52:12.425000 audit[2013]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.425000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd616ebfd0 a2=0 a3=0 items=0 ppid=1874 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:52:12.440000 audit[2015]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:52:12.440000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd4d127d20 a2=0 a3=0 items=0 ppid=1874 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.440000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:52:12.459000 audit[2017]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.459000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeabb85460 a2=0 a3=0 items=0 ppid=1874 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.459000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:52:12.483000 audit[2019]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.483000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff4bfeee00 a2=0 a3=0 items=0 ppid=1874 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:52:12.539000 audit[2024]: NETFILTER_CFG table=filter:28 family=2 entries=1 
op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.539000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe66d6aa50 a2=0 a3=0 items=0 ppid=1874 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:52:12.565000 audit[2026]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.565000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffebc7ba750 a2=0 a3=0 items=0 ppid=1874 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:52:12.587000 audit[2028]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.587000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff3b7171e0 a2=0 a3=0 items=0 ppid=1874 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:52:12.598000 audit[2030]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain 
pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.598000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde5a2e8e0 a2=0 a3=0 items=0 ppid=1874 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:52:12.620000 audit[2032]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.620000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc8ed72e40 a2=0 a3=0 items=0 ppid=1874 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:52:12.638000 audit[2034]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:52:12.638000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffca7e95b20 a2=0 a3=0 items=0 ppid=1874 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:52:12.715000 audit[2039]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2039 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.715000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc88250660 a2=0 a3=0 items=0 ppid=1874 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:52:12.732000 audit[2041]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.732000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff40bbfaf0 a2=0 a3=0 items=0 ppid=1874 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.732000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:52:12.866000 audit[2049]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.866000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe94ec7890 a2=0 a3=0 items=0 ppid=1874 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.866000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 00:52:12.961000 audit[2055]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.961000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe7e345420 a2=0 a3=0 items=0 ppid=1874 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:52:12.990000 audit[2057]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:12.990000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffeff297eb0 a2=0 a3=0 items=0 ppid=1874 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:12.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:52:13.007000 audit[2059]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:13.007000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1dbe6260 a2=0 a3=0 items=0 ppid=1874 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:13.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:52:13.029000 audit[2061]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:13.029000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd64372a50 a2=0 a3=0 items=0 ppid=1874 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:13.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:52:13.046000 audit[2063]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:52:13.046000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde44c45f0 a2=0 a3=0 items=0 ppid=1874 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:13.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:52:13.055478 systemd-networkd[1509]: docker0: Link UP Jan 24 00:52:13.079399 dockerd[1874]: time="2026-01-24T00:52:13.074866988Z" 
level=info msg="Loading containers: done." Jan 24 00:52:13.175556 dockerd[1874]: time="2026-01-24T00:52:13.175041359Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:52:13.176857 dockerd[1874]: time="2026-01-24T00:52:13.176623653Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:52:13.176857 dockerd[1874]: time="2026-01-24T00:52:13.176853570Z" level=info msg="Initializing buildkit" Jan 24 00:52:13.407122 dockerd[1874]: time="2026-01-24T00:52:13.405082846Z" level=info msg="Completed buildkit initialization" Jan 24 00:52:13.430690 dockerd[1874]: time="2026-01-24T00:52:13.430142897Z" level=info msg="Daemon has completed initialization" Jan 24 00:52:13.431163 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:52:13.439958 dockerd[1874]: time="2026-01-24T00:52:13.431670598Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:52:13.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:15.916901 containerd[1614]: time="2026-01-24T00:52:15.916630534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:52:17.473767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928925335.mount: Deactivated successfully. Jan 24 00:52:20.185752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:52:20.198923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:52:20.861701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
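The NETFILTER_CFG/SYSCALL/PROCTITLE triples above record dockerd driving xtables-nft-multi to build its DOCKER chains; the PROCTITLE field is the process's argv, hex-encoded with NUL bytes separating arguments. A minimal decoder (the sample string is copied verbatim from the `-N DOCKER-USER` record above):

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.

    Note: argument boundaries are flattened to spaces, so empty arguments
    (double NULs, which appear in a few records above) collapse.
    """
    raw = bytes.fromhex(hexstr)
    return raw.decode("utf-8", "replace").strip("\x00").replace("\x00", " ")

sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
          "002D740066696C746572002D4E00444F434B45522D55534552")
print(decode_proctitle(sample))
# /usr/bin/iptables --wait -t filter -N DOCKER-USER
```

Decoded this way, the audit stream reads as the ordinary sequence of `iptables`/`ip6tables` invocations Docker issues at daemon startup.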
Jan 24 00:52:20.873630 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 24 00:52:20.874965 kernel: audit: type=1130 audit(1769215940.861:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:20.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:20.908049 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:52:21.122975 kubelet[2173]: E0124 00:52:21.122207 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:52:21.129711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:52:21.130635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:52:21.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:21.132657 systemd[1]: kubelet.service: Consumed 576ms CPU time, 110.4M memory peak. Jan 24 00:52:21.161989 kernel: audit: type=1131 audit(1769215941.131:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:52:23.620062 update_engine[1599]: I20260124 00:52:23.619144 1599 update_attempter.cc:509] Updating boot flags... Jan 24 00:52:24.903540 containerd[1614]: time="2026-01-24T00:52:24.902125724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:24.909587 containerd[1614]: time="2026-01-24T00:52:24.909545533Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28895740" Jan 24 00:52:24.917821 containerd[1614]: time="2026-01-24T00:52:24.917771752Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:24.944799 containerd[1614]: time="2026-01-24T00:52:24.944640060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:24.949993 containerd[1614]: time="2026-01-24T00:52:24.949943540Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 9.033262625s" Jan 24 00:52:24.950169 containerd[1614]: time="2026-01-24T00:52:24.950139116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:52:24.958966 containerd[1614]: time="2026-01-24T00:52:24.958929806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:52:31.200860 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:52:31.231759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:52:31.821015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:52:31.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:31.866441 kernel: audit: type=1130 audit(1769215951.821:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:31.888746 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:52:32.180686 kubelet[2210]: E0124 00:52:32.179489 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:52:32.191719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:52:32.193646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:52:32.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:32.198989 systemd[1]: kubelet.service: Consumed 585ms CPU time, 109.4M memory peak. 
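The kubelet crash loop above (restart counters 2, 3, …) is the expected state on a kubeadm-provisioned node before `kubeadm init` or `kubeadm join` has run: those commands are what generate /var/lib/kubelet/config.yaml, and until then every kubelet start fails with this same "no such file or directory" error and systemd schedules another restart. As an illustrative sketch, the failing path can be pulled out of the journal message like so (the line below is abridged from the record above):

```python
import re

line = ('err="failed to load kubelet config file, path: '
        '/var/lib/kubelet/config.yaml, error: open '
        '/var/lib/kubelet/config.yaml: no such file or directory"')

# The message embeds the path between "path: " and the following comma.
match = re.search(r'path: ([^,]+),', line)
print(match.group(1))
# /var/lib/kubelet/config.yaml
```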
Jan 24 00:52:32.254009 kernel: audit: type=1131 audit(1769215952.198:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:32.305680 containerd[1614]: time="2026-01-24T00:52:32.304986658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:32.316545 containerd[1614]: time="2026-01-24T00:52:32.316496471Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 24 00:52:32.335800 containerd[1614]: time="2026-01-24T00:52:32.335583755Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:32.354577 containerd[1614]: time="2026-01-24T00:52:32.350483360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:32.354577 containerd[1614]: time="2026-01-24T00:52:32.353236236Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 7.394118487s" Jan 24 00:52:32.354577 containerd[1614]: time="2026-01-24T00:52:32.353486300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:52:32.358064 containerd[1614]: time="2026-01-24T00:52:32.357624569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:52:45.053945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:52:45.827966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:52:48.542648 containerd[1614]: time="2026-01-24T00:52:48.541441600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:48.551679 containerd[1614]: time="2026-01-24T00:52:48.549204814Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 24 00:52:48.561594 containerd[1614]: time="2026-01-24T00:52:48.561177160Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:48.573706 containerd[1614]: time="2026-01-24T00:52:48.573454733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:48.577659 containerd[1614]: time="2026-01-24T00:52:48.577432268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 16.219502743s" Jan 24 00:52:48.577659 containerd[1614]: time="2026-01-24T00:52:48.577653153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:52:48.595082 containerd[1614]: time="2026-01-24T00:52:48.593705826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:52:49.769844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:52:49.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.810545 kernel: audit: type=1130 audit(1769215969.771:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.835198 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:52:50.268189 kubelet[2231]: E0124 00:52:50.267597 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:52:50.274097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:52:50.274721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:52:50.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:50.276164 systemd[1]: kubelet.service: Consumed 2.723s CPU time, 109.8M memory peak. 
Jan 24 00:52:50.308715 kernel: audit: type=1131 audit(1769215970.275:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:52:52.178845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763229560.mount: Deactivated successfully. Jan 24 00:52:58.306170 containerd[1614]: time="2026-01-24T00:52:58.305919208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.327481 containerd[1614]: time="2026-01-24T00:52:58.327230444Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 24 00:52:58.332431 containerd[1614]: time="2026-01-24T00:52:58.332068686Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.351058 containerd[1614]: time="2026-01-24T00:52:58.350602068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.356093 containerd[1614]: time="2026-01-24T00:52:58.353944296Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 9.760164884s" Jan 24 00:52:58.356093 containerd[1614]: time="2026-01-24T00:52:58.353988328Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference 
\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:52:58.365549 containerd[1614]: time="2026-01-24T00:52:58.365499353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:53:00.501866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 24 00:53:00.722511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:04.834104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906747279.mount: Deactivated successfully. Jan 24 00:53:05.736632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:05.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:05.785598 kernel: audit: type=1130 audit(1769215985.735:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:05.816965 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:07.470116 kubelet[2266]: E0124 00:53:07.469448 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:07.717802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:07.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Jan 24 00:53:07.718095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:53:07.725847 systemd[1]: kubelet.service: Consumed 2.437s CPU time, 109.9M memory peak. Jan 24 00:53:07.800532 kernel: audit: type=1131 audit(1769215987.724:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:18.103654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 24 00:53:18.119905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:19.373956 containerd[1614]: time="2026-01-24T00:53:19.370210691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:19.382056 containerd[1614]: time="2026-01-24T00:53:19.378496228Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18554556" Jan 24 00:53:19.387953 containerd[1614]: time="2026-01-24T00:53:19.387847845Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:19.420524 containerd[1614]: time="2026-01-24T00:53:19.419910508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:19.425138 containerd[1614]: time="2026-01-24T00:53:19.423939489Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 21.058172588s" Jan 24 00:53:19.425138 containerd[1614]: time="2026-01-24T00:53:19.424086440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:53:19.446869 containerd[1614]: time="2026-01-24T00:53:19.445801648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:53:20.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:20.617563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:20.681739 kernel: audit: type=1130 audit(1769216000.617:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:20.749751 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:21.110206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943492880.mount: Deactivated successfully. 
Jan 24 00:53:21.181492 containerd[1614]: time="2026-01-24T00:53:21.178852511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:21.192042 containerd[1614]: time="2026-01-24T00:53:21.191128911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 24 00:53:21.213921 containerd[1614]: time="2026-01-24T00:53:21.207120198Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:21.389118 containerd[1614]: time="2026-01-24T00:53:21.386058466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:21.473144 containerd[1614]: time="2026-01-24T00:53:21.472043436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.026183789s" Jan 24 00:53:21.473144 containerd[1614]: time="2026-01-24T00:53:21.472518335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:53:21.487940 containerd[1614]: time="2026-01-24T00:53:21.483809533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:53:22.310987 kubelet[2322]: E0124 00:53:22.309825 2322 run.go:72] "command failed" err="failed to 
load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:22.321741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:22.322099 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:53:22.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:22.323066 systemd[1]: kubelet.service: Consumed 2.358s CPU time, 110.3M memory peak. Jan 24 00:53:22.386880 kernel: audit: type=1131 audit(1769216002.318:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:25.259714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888010462.mount: Deactivated successfully. Jan 24 00:53:32.513710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 24 00:53:32.525733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:34.677016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:34.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:34.704093 kernel: audit: type=1130 audit(1769216014.678:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:34.724110 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:36.118976 kubelet[2396]: E0124 00:53:36.118095 2396 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:36.126957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:36.129239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:53:36.132660 systemd[1]: kubelet.service: Consumed 2.290s CPU time, 110.4M memory peak. Jan 24 00:53:36.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:36.179224 kernel: audit: type=1131 audit(1769216016.131:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:53:43.662200 containerd[1614]: time="2026-01-24T00:53:43.661935639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:43.665803 containerd[1614]: time="2026-01-24T00:53:43.665513678Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57671516" Jan 24 00:53:43.670926 containerd[1614]: time="2026-01-24T00:53:43.670704748Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:43.681578 containerd[1614]: time="2026-01-24T00:53:43.681213929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:43.683501 containerd[1614]: time="2026-01-24T00:53:43.683102442Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 22.199236665s" Jan 24 00:53:43.683611 containerd[1614]: time="2026-01-24T00:53:43.683513219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:53:46.187753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 24 00:53:46.197041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:53:46.681539 kernel: audit: type=1130 audit(1769216026.652:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:46.653621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:46.701568 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:46.893913 kubelet[2439]: E0124 00:53:46.893463 2439 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:46.899424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:46.899871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:53:46.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:46.901582 systemd[1]: kubelet.service: Consumed 564ms CPU time, 110.7M memory peak. Jan 24 00:53:46.931664 kernel: audit: type=1131 audit(1769216026.900:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:53:48.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:48.253048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:48.253814 systemd[1]: kubelet.service: Consumed 564ms CPU time, 110.7M memory peak. Jan 24 00:53:48.262056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:48.309232 kernel: audit: type=1130 audit(1769216028.250:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:48.309524 kernel: audit: type=1131 audit(1769216028.252:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:48.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:48.346124 systemd[1]: Reload requested from client PID 2453 ('systemctl') (unit session-10.scope)... Jan 24 00:53:48.346572 systemd[1]: Reloading... Jan 24 00:53:48.593792 zram_generator::config[2502]: No configuration found. Jan 24 00:53:49.176787 systemd[1]: Reloading finished in 829 ms. 
Jan 24 00:53:49.246000 audit: BPF prog-id=61 op=LOAD Jan 24 00:53:49.259618 kernel: audit: type=1334 audit(1769216029.246:291): prog-id=61 op=LOAD Jan 24 00:53:49.259000 audit: BPF prog-id=58 op=UNLOAD Jan 24 00:53:49.282609 kernel: audit: type=1334 audit(1769216029.259:292): prog-id=58 op=UNLOAD Jan 24 00:53:49.282860 kernel: audit: type=1334 audit(1769216029.259:293): prog-id=62 op=LOAD Jan 24 00:53:49.259000 audit: BPF prog-id=62 op=LOAD Jan 24 00:53:49.293564 kernel: audit: type=1334 audit(1769216029.260:294): prog-id=63 op=LOAD Jan 24 00:53:49.260000 audit: BPF prog-id=63 op=LOAD Jan 24 00:53:49.260000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:53:49.306485 kernel: audit: type=1334 audit(1769216029.260:295): prog-id=59 op=UNLOAD Jan 24 00:53:49.260000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:53:49.261000 audit: BPF prog-id=64 op=LOAD Jan 24 00:53:49.262000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:53:49.317716 kernel: audit: type=1334 audit(1769216029.260:296): prog-id=60 op=UNLOAD Jan 24 00:53:49.266000 audit: BPF prog-id=65 op=LOAD Jan 24 00:53:49.266000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:53:49.266000 audit: BPF prog-id=66 op=LOAD Jan 24 00:53:49.266000 audit: BPF prog-id=67 op=LOAD Jan 24 00:53:49.266000 audit: BPF prog-id=48 op=UNLOAD Jan 24 00:53:49.266000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:53:49.273000 audit: BPF prog-id=68 op=LOAD Jan 24 00:53:49.273000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:53:49.274000 audit: BPF prog-id=69 op=LOAD Jan 24 00:53:49.275000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:53:49.275000 audit: BPF prog-id=70 op=LOAD Jan 24 00:53:49.275000 audit: BPF prog-id=71 op=LOAD Jan 24 00:53:49.275000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:53:49.275000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:53:49.277000 audit: BPF prog-id=72 op=LOAD Jan 24 00:53:49.277000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:53:49.277000 audit: BPF prog-id=73 op=LOAD Jan 24 00:53:49.277000 audit: BPF prog-id=74 op=LOAD Jan 24 00:53:49.277000 
audit: BPF prog-id=45 op=UNLOAD Jan 24 00:53:49.277000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:53:49.279000 audit: BPF prog-id=75 op=LOAD Jan 24 00:53:49.279000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:53:49.279000 audit: BPF prog-id=76 op=LOAD Jan 24 00:53:49.279000 audit: BPF prog-id=77 op=LOAD Jan 24 00:53:49.279000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:53:49.279000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:53:49.283000 audit: BPF prog-id=78 op=LOAD Jan 24 00:53:49.283000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:53:49.283000 audit: BPF prog-id=79 op=LOAD Jan 24 00:53:49.283000 audit: BPF prog-id=80 op=LOAD Jan 24 00:53:49.284000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:53:49.284000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:53:49.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:53:49.350581 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:53:49.350811 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:53:49.351645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:49.351807 systemd[1]: kubelet.service: Consumed 248ms CPU time, 98.7M memory peak. Jan 24 00:53:49.363786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:50.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:50.004900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:53:50.066602 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:53:50.388617 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:53:50.388617 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:53:50.388617 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:53:50.388617 kubelet[2547]: I0124 00:53:50.388050 2547 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:53:51.601660 kubelet[2547]: I0124 00:53:51.601604 2547 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:53:51.605472 kubelet[2547]: I0124 00:53:51.603729 2547 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:53:51.605472 kubelet[2547]: I0124 00:53:51.604727 2547 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:53:51.771101 kubelet[2547]: I0124 00:53:51.768892 2547 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:53:51.782463 kubelet[2547]: E0124 00:53:51.772063 2547 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:51.847959 kubelet[2547]: I0124 00:53:51.847494 2547 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:53:51.873598 kubelet[2547]: I0124 00:53:51.870716 2547 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:53:51.873598 kubelet[2547]: I0124 00:53:51.872694 2547 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:53:51.875107 kubelet[2547]: I0124 00:53:51.872731 2547 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolic
yOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:53:51.875107 kubelet[2547]: I0124 00:53:51.874565 2547 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:53:51.875107 kubelet[2547]: I0124 00:53:51.874592 2547 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:53:51.875770 kubelet[2547]: I0124 00:53:51.875179 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:51.884029 kubelet[2547]: I0124 00:53:51.883807 2547 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:53:51.884094 kubelet[2547]: I0124 00:53:51.884078 2547 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:53:51.884148 kubelet[2547]: I0124 00:53:51.884117 2547 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:53:51.884450 kubelet[2547]: I0124 00:53:51.884439 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:53:51.896759 kubelet[2547]: W0124 00:53:51.894878 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:51.906797 kubelet[2547]: E0124 00:53:51.905521 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:51.906797 kubelet[2547]: W0124 
00:53:51.905767 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:51.906797 kubelet[2547]: E0124 00:53:51.905830 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:51.911090 kubelet[2547]: I0124 00:53:51.910576 2547 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:53:51.918630 kubelet[2547]: I0124 00:53:51.915835 2547 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:53:51.918630 kubelet[2547]: W0124 00:53:51.916706 2547 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 00:53:51.923559 kubelet[2547]: I0124 00:53:51.922888 2547 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:53:51.923559 kubelet[2547]: I0124 00:53:51.923139 2547 server.go:1287] "Started kubelet" Jan 24 00:53:51.936557 kubelet[2547]: I0124 00:53:51.928088 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:53:51.936557 kubelet[2547]: I0124 00:53:51.933946 2547 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:53:51.936557 kubelet[2547]: I0124 00:53:51.935632 2547 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:53:51.939967 kubelet[2547]: I0124 00:53:51.939772 2547 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:53:51.943794 kubelet[2547]: I0124 00:53:51.943772 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:53:51.958647 kubelet[2547]: I0124 00:53:51.957019 2547 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:53:51.965824 kubelet[2547]: E0124 00:53:51.961901 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:51.965824 kubelet[2547]: I0124 00:53:51.962816 2547 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:53:51.965824 kubelet[2547]: I0124 00:53:51.963850 2547 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:53:51.965824 kubelet[2547]: I0124 00:53:51.964640 2547 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:53:51.967106 kubelet[2547]: E0124 00:53:51.961096 2547 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.188d849b79b3f0ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:53:51.923106028 +0000 UTC m=+1.803533812,LastTimestamp:2026-01-24 00:53:51.923106028 +0000 UTC m=+1.803533812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:53:51.972072 kubelet[2547]: E0124 00:53:51.970865 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Jan 24 00:53:51.978423 kubelet[2547]: I0124 00:53:51.973869 2547 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:53:51.978423 kubelet[2547]: I0124 00:53:51.973995 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:53:51.978423 kubelet[2547]: E0124 00:53:51.976499 2547 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:53:51.978423 kubelet[2547]: W0124 00:53:51.976801 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:51.978423 kubelet[2547]: E0124 00:53:51.976860 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:51.984646 kubelet[2547]: I0124 00:53:51.981919 2547 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:53:52.019000 audit[2565]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.028877 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 24 00:53:52.028965 kernel: audit: type=1325 audit(1769216032.019:333): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.062998 kubelet[2547]: E0124 00:53:52.062959 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:52.019000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce3a7d060 a2=0 a3=0 items=0 ppid=2547 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.069966 kubelet[2547]: I0124 00:53:52.069179 2547 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 
24 00:53:52.069966 kubelet[2547]: I0124 00:53:52.069202 2547 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:53:52.069966 kubelet[2547]: I0124 00:53:52.069570 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:52.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:53:52.118740 kernel: audit: type=1300 audit(1769216032.019:333): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce3a7d060 a2=0 a3=0 items=0 ppid=2547 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.118876 kernel: audit: type=1327 audit(1769216032.019:333): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:53:52.118915 kernel: audit: type=1325 audit(1769216032.080:334): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.080000 audit[2568]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.137555 kernel: audit: type=1300 audit(1769216032.080:334): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff581abb00 a2=0 a3=0 items=0 ppid=2547 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.080000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff581abb00 a2=0 a3=0 items=0 ppid=2547 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.164074 kubelet[2547]: E0124 00:53:52.163912 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:52.173522 kubelet[2547]: E0124 00:53:52.172808 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Jan 24 00:53:52.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:53:52.204607 kernel: audit: type=1327 audit(1769216032.080:334): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:53:52.204845 kernel: audit: type=1325 audit(1769216032.118:335): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.118000 audit[2570]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.227572 kernel: audit: type=1300 audit(1769216032.118:335): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe59b320c0 a2=0 a3=0 items=0 ppid=2547 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.118000 audit[2570]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe59b320c0 a2=0 a3=0 items=0 ppid=2547 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.238700 kubelet[2547]: I0124 
00:53:52.235990 2547 policy_none.go:49] "None policy: Start" Jan 24 00:53:52.238700 kubelet[2547]: I0124 00:53:52.237577 2547 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:53:52.238700 kubelet[2547]: I0124 00:53:52.237767 2547 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:53:52.264537 kubelet[2547]: E0124 00:53:52.264506 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:52.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:53:52.147000 audit[2572]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.301046 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:53:52.319608 kernel: audit: type=1327 audit(1769216032.118:335): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:53:52.319697 kernel: audit: type=1325 audit(1769216032.147:336): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.147000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed4965f50 a2=0 a3=0 items=0 ppid=2547 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:53:52.335533 kubelet[2547]: I0124 00:53:52.335113 2547 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 24 00:53:52.332000 audit[2576]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.332000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd878879f0 a2=0 a3=0 items=0 ppid=2547 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 24 00:53:52.343000 audit[2578]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.343000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6fa8e290 a2=0 a3=0 items=0 ppid=2547 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:53:52.357000 audit[2577]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:53:52.357000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd94874740 a2=0 a3=0 items=0 ppid=2547 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:53:52.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:53:52.364139 kubelet[2547]: I0124 00:53:52.360656 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:53:52.364139 kubelet[2547]: I0124 00:53:52.360945 2547 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:53:52.364139 kubelet[2547]: I0124 00:53:52.361461 2547 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:53:52.364139 kubelet[2547]: I0124 00:53:52.361475 2547 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:53:52.364139 kubelet[2547]: E0124 00:53:52.361553 2547 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:53:52.364139 kubelet[2547]: W0124 00:53:52.363051 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:52.364139 kubelet[2547]: E0124 00:53:52.363544 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:52.365118 kubelet[2547]: E0124 00:53:52.364751 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:52.366585 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 24 00:53:52.369000 audit[2579]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.369000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe03e09270 a2=0 a3=0 items=0 ppid=2547 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:53:52.371000 audit[2580]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:53:52.371000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd619fa9c0 a2=0 a3=0 items=0 ppid=2547 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:53:52.377153 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:53:52.378000 audit[2582]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:53:52.378000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe87a601d0 a2=0 a3=0 items=0 ppid=2547 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:53:52.380000 audit[2581]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:53:52.380000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd78cd23a0 a2=0 a3=0 items=0 ppid=2547 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.380000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:53:52.385000 audit[2583]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:53:52.385000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd66eaf660 a2=0 a3=0 items=0 ppid=2547 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:52.385000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:53:52.393641 kubelet[2547]: I0124 00:53:52.391443 2547 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:53:52.394011 kubelet[2547]: I0124 00:53:52.393819 2547 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:53:52.397680 kubelet[2547]: I0124 00:53:52.394788 2547 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:53:52.397680 kubelet[2547]: I0124 00:53:52.395979 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:53:52.403677 kubelet[2547]: E0124 00:53:52.401795 2547 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:53:52.403677 kubelet[2547]: E0124 00:53:52.403199 2547 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:53:52.473523 kubelet[2547]: I0124 00:53:52.472912 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:52.473676 kubelet[2547]: I0124 00:53:52.473625 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:52.473676 kubelet[2547]: I0124 00:53:52.473663 2547 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:52.501983 kubelet[2547]: I0124 00:53:52.500823 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:52.501983 kubelet[2547]: E0124 00:53:52.501564 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 24 00:53:52.510975 systemd[1]: Created slice kubepods-burstable-pod40ed8643d96e527d70b60692f1e07016.slice - libcontainer container kubepods-burstable-pod40ed8643d96e527d70b60692f1e07016.slice. Jan 24 00:53:52.546541 kubelet[2547]: E0124 00:53:52.546227 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:52.559219 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 24 00:53:52.575804 kubelet[2547]: I0124 00:53:52.574554 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:52.575804 kubelet[2547]: I0124 00:53:52.574615 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:52.575804 kubelet[2547]: E0124 00:53:52.574721 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Jan 24 00:53:52.575804 kubelet[2547]: I0124 00:53:52.574812 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:52.575804 kubelet[2547]: I0124 00:53:52.574847 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:52.576208 kubelet[2547]: 
I0124 00:53:52.574872 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:52.576208 kubelet[2547]: I0124 00:53:52.574897 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:52.589503 kubelet[2547]: E0124 00:53:52.588061 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:52.594901 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 24 00:53:52.603125 kubelet[2547]: E0124 00:53:52.602995 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:52.706102 kubelet[2547]: I0124 00:53:52.705967 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:52.706839 kubelet[2547]: E0124 00:53:52.706728 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 24 00:53:52.850930 kubelet[2547]: E0124 00:53:52.850181 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.853561 containerd[1614]: time="2026-01-24T00:53:52.853178031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40ed8643d96e527d70b60692f1e07016,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:52.891645 kubelet[2547]: E0124 00:53:52.890190 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.895859 containerd[1614]: time="2026-01-24T00:53:52.895797270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:52.906214 kubelet[2547]: E0124 00:53:52.905814 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.912577 containerd[1614]: time="2026-01-24T00:53:52.909667047Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:52.926998 kubelet[2547]: W0124 00:53:52.926116 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:52.926998 kubelet[2547]: E0124 00:53:52.926713 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:52.998240 kubelet[2547]: W0124 00:53:52.995728 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:52.998240 kubelet[2547]: E0124 00:53:52.996837 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:53.114089 kubelet[2547]: I0124 00:53:53.111972 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:53.120780 kubelet[2547]: E0124 00:53:53.119009 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 24 00:53:53.126966 containerd[1614]: 
time="2026-01-24T00:53:53.124631115Z" level=info msg="connecting to shim 1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a" address="unix:///run/containerd/s/016a1d30176c807e1ca1c723787994f2a7764c171a138f0a8e6559064807f29a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:53:53.127808 containerd[1614]: time="2026-01-24T00:53:53.127768854Z" level=info msg="connecting to shim 28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4" address="unix:///run/containerd/s/dce71b06184b5d10854d57dd1f810eea9910da04b5d89a2e19eb99598585cfbe" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:53:53.129833 containerd[1614]: time="2026-01-24T00:53:53.129808165Z" level=info msg="connecting to shim e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e" address="unix:///run/containerd/s/1e08e25b84f65dc80355168101ee297940f1378b10e822e5e098c60b15656ca9" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:53:53.361891 systemd[1]: Started cri-containerd-e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e.scope - libcontainer container e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e. Jan 24 00:53:53.377879 kubelet[2547]: E0124 00:53:53.377652 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="1.6s" Jan 24 00:53:53.389218 systemd[1]: Started cri-containerd-1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a.scope - libcontainer container 1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a. Jan 24 00:53:53.395766 systemd[1]: Started cri-containerd-28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4.scope - libcontainer container 28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4. 
Jan 24 00:53:53.433000 audit: BPF prog-id=81 op=LOAD Jan 24 00:53:53.437000 audit: BPF prog-id=82 op=LOAD Jan 24 00:53:53.437000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.438000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:53:53.438000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.449000 audit: BPF prog-id=83 op=LOAD Jan 24 00:53:53.449000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.449000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.449000 audit: BPF prog-id=84 op=LOAD Jan 24 00:53:53.449000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.449000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:53:53.449000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.449000 audit: BPF prog-id=83 op=UNLOAD Jan 24 00:53:53.449000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:53:53.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.450000 audit: BPF prog-id=85 op=LOAD Jan 24 00:53:53.451000 audit: BPF prog-id=86 op=LOAD Jan 24 00:53:53.451000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.451000 audit: BPF prog-id=86 op=UNLOAD Jan 24 00:53:53.451000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.449000 audit: BPF prog-id=87 op=LOAD Jan 24 00:53:53.449000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2609 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646136326435623363636262636133386537373532326666313836 Jan 24 00:53:53.452000 audit: BPF prog-id=88 op=LOAD Jan 24 00:53:53.452000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.452000 audit: BPF prog-id=89 op=LOAD Jan 24 00:53:53.452000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.452000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:53:53.452000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.452000 audit: BPF prog-id=88 op=UNLOAD Jan 24 00:53:53.452000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.452000 audit: BPF prog-id=90 op=LOAD Jan 24 00:53:53.452000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2607 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539396534363864643036656130313937356532316235643865343037 Jan 24 00:53:53.469000 audit: BPF prog-id=91 op=LOAD Jan 24 00:53:53.470000 audit: BPF prog-id=92 op=LOAD Jan 24 00:53:53.470000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c000128238 a2=98 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.470000 audit: BPF prog-id=92 op=UNLOAD Jan 24 00:53:53.470000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.470000 audit: BPF prog-id=93 op=LOAD Jan 24 00:53:53.470000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.472000 audit: BPF prog-id=94 op=LOAD Jan 24 00:53:53.472000 audit[2643]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.472000 audit: BPF prog-id=94 op=UNLOAD Jan 24 00:53:53.472000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.472000 audit: BPF prog-id=93 op=UNLOAD Jan 24 00:53:53.472000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.472000 audit: BPF prog-id=95 op=LOAD Jan 24 
00:53:53.472000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2612 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626337363361646262336663643462363330366438666665633436 Jan 24 00:53:53.487652 kubelet[2547]: W0124 00:53:53.486994 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:53.487652 kubelet[2547]: E0124 00:53:53.487060 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:53.659031 containerd[1614]: time="2026-01-24T00:53:53.648756662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40ed8643d96e527d70b60692f1e07016,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a\"" Jan 24 00:53:53.659157 kubelet[2547]: E0124 00:53:53.657069 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:53.663715 kubelet[2547]: W0124 00:53:53.663642 2547 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 24 00:53:53.663797 kubelet[2547]: E0124 00:53:53.663720 2547 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:53.671459 containerd[1614]: time="2026-01-24T00:53:53.670046672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4\"" Jan 24 00:53:53.675629 kubelet[2547]: E0124 00:53:53.675156 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:53.691150 containerd[1614]: time="2026-01-24T00:53:53.690967724Z" level=info msg="CreateContainer within sandbox \"28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:53:53.691762 containerd[1614]: time="2026-01-24T00:53:53.691725661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e\"" Jan 24 00:53:53.694936 containerd[1614]: time="2026-01-24T00:53:53.694242910Z" level=info msg="CreateContainer within sandbox \"1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 
00:53:53.696897 kubelet[2547]: E0124 00:53:53.696238 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:53.706022 containerd[1614]: time="2026-01-24T00:53:53.705862245Z" level=info msg="CreateContainer within sandbox \"e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:53:53.763686 containerd[1614]: time="2026-01-24T00:53:53.763627534Z" level=info msg="Container a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:53:53.794551 containerd[1614]: time="2026-01-24T00:53:53.791038732Z" level=info msg="Container 3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:53:53.809470 containerd[1614]: time="2026-01-24T00:53:53.809224246Z" level=info msg="Container f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:53:53.838914 containerd[1614]: time="2026-01-24T00:53:53.838011132Z" level=info msg="CreateContainer within sandbox \"28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb\"" Jan 24 00:53:53.842214 containerd[1614]: time="2026-01-24T00:53:53.841977507Z" level=info msg="StartContainer for \"a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb\"" Jan 24 00:53:53.845716 containerd[1614]: time="2026-01-24T00:53:53.844832105Z" level=info msg="connecting to shim a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb" address="unix:///run/containerd/s/dce71b06184b5d10854d57dd1f810eea9910da04b5d89a2e19eb99598585cfbe" protocol=ttrpc version=3 Jan 24 00:53:53.845716 
containerd[1614]: time="2026-01-24T00:53:53.845039067Z" level=info msg="CreateContainer within sandbox \"1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21\"" Jan 24 00:53:53.848616 containerd[1614]: time="2026-01-24T00:53:53.848582486Z" level=info msg="StartContainer for \"3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21\"" Jan 24 00:53:53.852998 containerd[1614]: time="2026-01-24T00:53:53.852964117Z" level=info msg="connecting to shim 3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21" address="unix:///run/containerd/s/016a1d30176c807e1ca1c723787994f2a7764c171a138f0a8e6559064807f29a" protocol=ttrpc version=3 Jan 24 00:53:53.876491 containerd[1614]: time="2026-01-24T00:53:53.875920617Z" level=info msg="CreateContainer within sandbox \"e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128\"" Jan 24 00:53:53.893661 containerd[1614]: time="2026-01-24T00:53:53.893560330Z" level=info msg="StartContainer for \"f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128\"" Jan 24 00:53:53.904001 containerd[1614]: time="2026-01-24T00:53:53.903790818Z" level=info msg="connecting to shim f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128" address="unix:///run/containerd/s/1e08e25b84f65dc80355168101ee297940f1378b10e822e5e098c60b15656ca9" protocol=ttrpc version=3 Jan 24 00:53:53.928765 systemd[1]: Started cri-containerd-3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21.scope - libcontainer container 3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21. 
Jan 24 00:53:53.939814 kubelet[2547]: E0124 00:53:53.939122 2547 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:53:53.941112 kubelet[2547]: I0124 00:53:53.940972 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:53.943061 kubelet[2547]: E0124 00:53:53.941619 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 24 00:53:53.954835 systemd[1]: Started cri-containerd-a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb.scope - libcontainer container a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb. 
Jan 24 00:53:53.973000 audit: BPF prog-id=96 op=LOAD Jan 24 00:53:53.980000 audit: BPF prog-id=97 op=LOAD Jan 24 00:53:53.980000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.980000 audit: BPF prog-id=97 op=UNLOAD Jan 24 00:53:53.980000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.980000 audit: BPF prog-id=98 op=LOAD Jan 24 00:53:53.980000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.980000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.984000 audit: BPF prog-id=99 op=LOAD Jan 24 00:53:53.984000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.984000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:53:53.984000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.984000 audit: BPF prog-id=98 op=UNLOAD Jan 24 00:53:53.984000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:53:53.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:53.984000 audit: BPF prog-id=100 op=LOAD Jan 24 00:53:53.984000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2609 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:53.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363316465363463336133633336636462666235306330646233336163 Jan 24 00:53:54.088672 systemd[1]: Started cri-containerd-f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128.scope - libcontainer container f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128. 
Jan 24 00:53:54.097000 audit: BPF prog-id=101 op=LOAD Jan 24 00:53:54.102000 audit: BPF prog-id=102 op=LOAD Jan 24 00:53:54.102000 audit[2730]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.102000 audit: BPF prog-id=102 op=UNLOAD Jan 24 00:53:54.102000 audit[2730]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.105000 audit: BPF prog-id=103 op=LOAD Jan 24 00:53:54.105000 audit[2730]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.105000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.105000 audit: BPF prog-id=104 op=LOAD Jan 24 00:53:54.105000 audit[2730]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.105000 audit: BPF prog-id=104 op=UNLOAD Jan 24 00:53:54.105000 audit[2730]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.105000 audit: BPF prog-id=103 op=UNLOAD Jan 24 00:53:54.105000 audit[2730]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:53:54.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.105000 audit: BPF prog-id=105 op=LOAD Jan 24 00:53:54.105000 audit[2730]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2612 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131646131356634363062326537336166343530396334313065323234 Jan 24 00:53:54.196000 audit: BPF prog-id=106 op=LOAD Jan 24 00:53:54.198000 audit: BPF prog-id=107 op=LOAD Jan 24 00:53:54.198000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a238 a2=98 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.199000 audit: BPF prog-id=107 op=UNLOAD Jan 24 00:53:54.199000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.199000 audit: BPF prog-id=108 op=LOAD Jan 24 00:53:54.199000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a488 a2=98 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.199000 audit: BPF prog-id=109 op=LOAD Jan 24 00:53:54.199000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00015a218 a2=98 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.199000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:53:54.199000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.200000 audit: BPF prog-id=108 op=UNLOAD Jan 24 00:53:54.200000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.200000 audit: BPF prog-id=110 op=LOAD Jan 24 00:53:54.200000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a6e8 a2=98 a3=0 items=0 ppid=2607 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:54.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633316139383632653038623732613233303462616230346634393837 Jan 24 00:53:54.237841 containerd[1614]: time="2026-01-24T00:53:54.237796105Z" level=info msg="StartContainer for \"3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21\" returns 
successfully" Jan 24 00:53:54.315923 containerd[1614]: time="2026-01-24T00:53:54.315771977Z" level=info msg="StartContainer for \"a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb\" returns successfully" Jan 24 00:53:54.390210 containerd[1614]: time="2026-01-24T00:53:54.389975678Z" level=info msg="StartContainer for \"f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128\" returns successfully" Jan 24 00:53:54.413613 kubelet[2547]: E0124 00:53:54.413201 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:54.418203 kubelet[2547]: E0124 00:53:54.417543 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:54.423973 kubelet[2547]: E0124 00:53:54.423842 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:54.424100 kubelet[2547]: E0124 00:53:54.424013 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:54.443575 kubelet[2547]: E0124 00:53:54.443216 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:54.443750 kubelet[2547]: E0124 00:53:54.443661 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:55.458226 kubelet[2547]: E0124 00:53:55.458097 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 
24 00:53:55.462749 kubelet[2547]: E0124 00:53:55.458478 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:55.462749 kubelet[2547]: E0124 00:53:55.461939 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:55.462749 kubelet[2547]: E0124 00:53:55.462087 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:55.558727 kubelet[2547]: I0124 00:53:55.558418 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:57.706912 kubelet[2547]: E0124 00:53:57.706854 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:57.707788 kubelet[2547]: E0124 00:53:57.707427 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:57.975531 kubelet[2547]: E0124 00:53:57.968989 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:57.975531 kubelet[2547]: E0124 00:53:57.969489 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:58.160680 kubelet[2547]: E0124 00:53:58.159842 2547 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:58.160680 kubelet[2547]: E0124 
00:53:58.160687 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:58.502664 kubelet[2547]: E0124 00:53:58.502461 2547 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:53:58.784037 kubelet[2547]: I0124 00:53:58.780863 2547 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:53:58.784037 kubelet[2547]: E0124 00:53:58.780983 2547 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:53:58.873109 kubelet[2547]: I0124 00:53:58.869692 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:58.899862 kubelet[2547]: I0124 00:53:58.899585 2547 apiserver.go:52] "Watching apiserver" Jan 24 00:53:58.911047 kubelet[2547]: E0124 00:53:58.910712 2547 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:58.911047 kubelet[2547]: I0124 00:53:58.910973 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:58.926992 kubelet[2547]: E0124 00:53:58.926664 2547 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:58.926992 kubelet[2547]: I0124 00:53:58.926863 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:58.942089 kubelet[2547]: E0124 00:53:58.939135 2547 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:58.964530 kubelet[2547]: I0124 00:53:58.964455 2547 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:54:06.064758 systemd[1]: Reload requested from client PID 2835 ('systemctl') (unit session-10.scope)... Jan 24 00:54:06.066099 systemd[1]: Reloading... Jan 24 00:54:07.571938 zram_generator::config[2880]: No configuration found. Jan 24 00:54:07.876531 kubelet[2547]: I0124 00:54:07.874879 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:54:08.014901 kubelet[2547]: E0124 00:54:08.014734 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:08.031007 kubelet[2547]: I0124 00:54:08.030115 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:54:08.256815 kubelet[2547]: E0124 00:54:08.235717 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:08.297128 kubelet[2547]: I0124 00:54:08.288763 2547 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:08.387918 kubelet[2547]: E0124 00:54:08.387708 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:08.730102 kubelet[2547]: I0124 00:54:08.729586 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.729201073 
podStartE2EDuration="1.729201073s" podCreationTimestamp="2026-01-24 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:08.724837692 +0000 UTC m=+18.605265496" watchObservedRunningTime="2026-01-24 00:54:08.729201073 +0000 UTC m=+18.609628857" Jan 24 00:54:09.535822 kubelet[2547]: E0124 00:54:09.523062 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:09.811774 kubelet[2547]: E0124 00:54:09.658969 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:09.811774 kubelet[2547]: E0124 00:54:09.669999 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:11.929506 kubelet[2547]: E0124 00:54:11.929150 2547 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.11s" Jan 24 00:54:12.722175 kubelet[2547]: I0124 00:54:12.719860 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.719837422 podStartE2EDuration="4.719837422s" podCreationTimestamp="2026-01-24 00:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:09.903199142 +0000 UTC m=+19.783626945" watchObservedRunningTime="2026-01-24 00:54:12.719837422 +0000 UTC m=+22.600265205" Jan 24 00:54:13.782072 kubelet[2547]: E0124 00:54:13.781148 2547 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.407s" Jan 
24 00:54:14.093905 systemd[1]: Reloading finished in 8023 ms. Jan 24 00:54:14.770572 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:14.900803 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:54:15.004555 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 24 00:54:15.064494 kernel: audit: type=1131 audit(1769216054.955:393): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:14.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:14.931176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:15.018786 systemd[1]: kubelet.service: Consumed 7.958s CPU time, 133.1M memory peak. Jan 24 00:54:15.264059 kernel: audit: type=1334 audit(1769216055.226:394): prog-id=111 op=LOAD Jan 24 00:54:15.226000 audit: BPF prog-id=111 op=LOAD Jan 24 00:54:15.192685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:54:15.373737 kernel: audit: type=1334 audit(1769216055.227:395): prog-id=72 op=UNLOAD Jan 24 00:54:15.377216 kernel: audit: type=1334 audit(1769216055.229:396): prog-id=112 op=LOAD Jan 24 00:54:15.377459 kernel: audit: type=1334 audit(1769216055.229:397): prog-id=113 op=LOAD Jan 24 00:54:15.377494 kernel: audit: type=1334 audit(1769216055.230:398): prog-id=73 op=UNLOAD Jan 24 00:54:15.377529 kernel: audit: type=1334 audit(1769216055.230:399): prog-id=74 op=UNLOAD Jan 24 00:54:15.227000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:54:15.229000 audit: BPF prog-id=112 op=LOAD Jan 24 00:54:15.229000 audit: BPF prog-id=113 op=LOAD Jan 24 00:54:15.230000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:54:15.230000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:54:15.381678 containerd[1614]: time="2026-01-24T00:54:15.268459850Z" level=error msg="collecting metrics for f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128" error="context canceled" Jan 24 00:54:15.381678 containerd[1614]: time="2026-01-24T00:54:15.378997655Z" level=error msg="ttrpc: received message on inactive stream" stream=21 Jan 24 00:54:15.258000 audit: BPF prog-id=114 op=LOAD Jan 24 00:54:15.413920 kernel: audit: type=1334 audit(1769216055.258:400): prog-id=114 op=LOAD Jan 24 00:54:15.415596 kernel: audit: type=1334 audit(1769216055.258:401): prog-id=64 op=UNLOAD Jan 24 00:54:15.258000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:54:15.427228 kernel: audit: type=1334 audit(1769216055.260:402): prog-id=115 op=LOAD Jan 24 00:54:15.260000 audit: BPF prog-id=115 op=LOAD Jan 24 00:54:15.260000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:54:15.262000 audit: BPF prog-id=116 op=LOAD Jan 24 00:54:15.262000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:54:15.263000 audit: BPF prog-id=117 op=LOAD Jan 24 00:54:15.263000 audit: BPF prog-id=118 op=LOAD Jan 24 00:54:15.263000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:54:15.263000 audit: BPF prog-id=77 op=UNLOAD Jan 24 00:54:15.267000 audit: BPF prog-id=119 op=LOAD Jan 
24 00:54:15.267000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:54:15.267000 audit: BPF prog-id=120 op=LOAD Jan 24 00:54:15.267000 audit: BPF prog-id=121 op=LOAD Jan 24 00:54:15.268000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:54:15.268000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:54:15.291000 audit: BPF prog-id=122 op=LOAD Jan 24 00:54:15.291000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:54:15.291000 audit: BPF prog-id=123 op=LOAD Jan 24 00:54:15.291000 audit: BPF prog-id=124 op=LOAD Jan 24 00:54:15.291000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:54:15.291000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:54:15.303000 audit: BPF prog-id=125 op=LOAD Jan 24 00:54:15.303000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:54:15.303000 audit: BPF prog-id=126 op=LOAD Jan 24 00:54:15.303000 audit: BPF prog-id=127 op=LOAD Jan 24 00:54:15.305000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:54:15.305000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:54:15.319000 audit: BPF prog-id=128 op=LOAD Jan 24 00:54:15.320000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:54:15.346000 audit: BPF prog-id=129 op=LOAD Jan 24 00:54:15.356000 audit: BPF prog-id=130 op=LOAD Jan 24 00:54:15.360000 audit: BPF prog-id=70 op=UNLOAD Jan 24 00:54:15.368000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:54:18.337694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:18.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:18.400669 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:54:18.598445 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:54:18.598445 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:54:18.598445 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:54:18.598919 kubelet[2926]: I0124 00:54:18.598786 2926 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:54:18.636473 kubelet[2926]: I0124 00:54:18.635545 2926 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:54:18.636473 kubelet[2926]: I0124 00:54:18.635585 2926 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:54:18.636473 kubelet[2926]: I0124 00:54:18.635936 2926 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:54:18.655512 kubelet[2926]: I0124 00:54:18.653710 2926 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:54:18.796626 kubelet[2926]: I0124 00:54:18.795725 2926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:54:18.902844 kubelet[2926]: I0124 00:54:18.896700 2926 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:54:19.021573 kubelet[2926]: I0124 00:54:19.020807 2926 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:54:19.024748 kubelet[2926]: I0124 00:54:19.021908 2926 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:54:19.024748 kubelet[2926]: I0124 00:54:19.022180 2926 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:54:19.024748 kubelet[2926]: I0124 00:54:19.023738 2926 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 24 00:54:19.024748 kubelet[2926]: I0124 00:54:19.023757 2926 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:54:19.027605 kubelet[2926]: I0124 00:54:19.023832 2926 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:54:19.027605 kubelet[2926]: I0124 00:54:19.024181 2926 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:54:19.027605 kubelet[2926]: I0124 00:54:19.024225 2926 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:54:19.027605 kubelet[2926]: I0124 00:54:19.024479 2926 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:54:19.027605 kubelet[2926]: I0124 00:54:19.024502 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:54:19.240889 kubelet[2926]: I0124 00:54:19.239669 2926 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:54:19.269936 kubelet[2926]: I0124 00:54:19.263748 2926 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:54:19.269936 kubelet[2926]: I0124 00:54:19.269181 2926 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:54:19.281675 kubelet[2926]: I0124 00:54:19.277625 2926 server.go:1287] "Started kubelet" Jan 24 00:54:19.281675 kubelet[2926]: I0124 00:54:19.279716 2926 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:54:19.286717 kubelet[2926]: I0124 00:54:19.283713 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:54:19.286717 kubelet[2926]: I0124 00:54:19.286583 2926 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:54:19.301009 kubelet[2926]: I0124 00:54:19.300712 2926 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:54:19.335861 kubelet[2926]: E0124 00:54:19.334686 2926 
kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:54:19.344593 kubelet[2926]: I0124 00:54:19.337972 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:54:19.344593 kubelet[2926]: I0124 00:54:19.344485 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:54:19.463551 kubelet[2926]: I0124 00:54:19.446965 2926 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:54:19.562027 kubelet[2926]: I0124 00:54:19.559179 2926 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:54:19.599441 kubelet[2926]: I0124 00:54:19.573924 2926 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:54:19.610026 kubelet[2926]: I0124 00:54:19.609991 2926 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:54:19.616540 kubelet[2926]: I0124 00:54:19.614769 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:54:19.636572 kubelet[2926]: I0124 00:54:19.634956 2926 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:54:19.787845 kubelet[2926]: I0124 00:54:19.787690 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:54:19.805438 kubelet[2926]: I0124 00:54:19.804506 2926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:54:19.805438 kubelet[2926]: I0124 00:54:19.804560 2926 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:54:19.805438 kubelet[2926]: I0124 00:54:19.804597 2926 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:54:19.805438 kubelet[2926]: I0124 00:54:19.804613 2926 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:54:19.805438 kubelet[2926]: E0124 00:54:19.804714 2926 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:54:19.905706 kubelet[2926]: E0124 00:54:19.905579 2926 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:54:20.009726 kubelet[2926]: I0124 00:54:20.009014 2926 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:54:20.009726 kubelet[2926]: I0124 00:54:20.009037 2926 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:54:20.009726 kubelet[2926]: I0124 00:54:20.009170 2926 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:54:20.009955 kubelet[2926]: I0124 00:54:20.009911 2926 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:54:20.009955 kubelet[2926]: I0124 00:54:20.009929 2926 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:54:20.009955 kubelet[2926]: I0124 00:54:20.009954 2926 policy_none.go:49] "None policy: Start" Jan 24 00:54:20.010192 kubelet[2926]: I0124 00:54:20.009968 2926 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:54:20.010192 kubelet[2926]: I0124 00:54:20.009984 2926 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:54:20.010563 kubelet[2926]: I0124 00:54:20.010457 2926 state_mem.go:75] "Updated machine memory state" Jan 24 00:54:20.030041 kubelet[2926]: I0124 00:54:20.029918 
2926 apiserver.go:52] "Watching apiserver" Jan 24 00:54:20.062645 kubelet[2926]: I0124 00:54:20.060832 2926 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:54:20.062645 kubelet[2926]: I0124 00:54:20.061709 2926 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:54:20.062645 kubelet[2926]: I0124 00:54:20.061726 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:54:20.066521 kubelet[2926]: I0124 00:54:20.066022 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:54:20.069778 kubelet[2926]: I0124 00:54:20.069664 2926 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:54:20.070541 kubelet[2926]: E0124 00:54:20.069977 2926 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:54:20.070598 containerd[1614]: time="2026-01-24T00:54:20.070243373Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:54:20.071163 kubelet[2926]: I0124 00:54:20.070635 2926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:54:20.163218 systemd[1]: Created slice kubepods-besteffort-pod4f7a28e0_1661_4169_837b_259d650169c1.slice - libcontainer container kubepods-besteffort-pod4f7a28e0_1661_4169_837b_259d650169c1.slice. 
Jan 24 00:54:20.205947 kubelet[2926]: I0124 00:54:20.205436 2926 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:54:20.205947 kubelet[2926]: I0124 00:54:20.205687 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:54:20.251823 kubelet[2926]: I0124 00:54:20.205728 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:54:20.270423 kubelet[2926]: I0124 00:54:20.269227 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40ed8643d96e527d70b60692f1e07016-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40ed8643d96e527d70b60692f1e07016\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:54:20.270423 kubelet[2926]: I0124 00:54:20.269645 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:20.270423 kubelet[2926]: I0124 00:54:20.269724 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:20.270423 kubelet[2926]: I0124 00:54:20.269767 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f7a28e0-1661-4169-837b-259d650169c1-lib-modules\") pod \"kube-proxy-n6cbl\" (UID: \"4f7a28e0-1661-4169-837b-259d650169c1\") " pod="kube-system/kube-proxy-n6cbl" Jan 24 00:54:20.270423 kubelet[2926]: I0124 00:54:20.269800 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmfz\" (UniqueName: \"kubernetes.io/projected/4f7a28e0-1661-4169-837b-259d650169c1-kube-api-access-sfmfz\") pod \"kube-proxy-n6cbl\" (UID: \"4f7a28e0-1661-4169-837b-259d650169c1\") " pod="kube-system/kube-proxy-n6cbl" Jan 24 00:54:20.270916 kubelet[2926]: I0124 00:54:20.269832 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:20.270916 kubelet[2926]: I0124 00:54:20.269865 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:20.270916 kubelet[2926]: I0124 00:54:20.269891 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:54:20.270916 kubelet[2926]: I0124 00:54:20.269929 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:54:20.270916 kubelet[2926]: I0124 00:54:20.269963 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f7a28e0-1661-4169-837b-259d650169c1-kube-proxy\") pod \"kube-proxy-n6cbl\" (UID: \"4f7a28e0-1661-4169-837b-259d650169c1\") " pod="kube-system/kube-proxy-n6cbl" Jan 24 00:54:20.271695 kubelet[2926]: I0124 00:54:20.269998 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f7a28e0-1661-4169-837b-259d650169c1-xtables-lock\") pod \"kube-proxy-n6cbl\" (UID: \"4f7a28e0-1661-4169-837b-259d650169c1\") " pod="kube-system/kube-proxy-n6cbl" Jan 24 00:54:20.306181 kubelet[2926]: I0124 00:54:20.304847 2926 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:54:20.392615 kubelet[2926]: I0124 00:54:20.390576 2926 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:54:20.392615 kubelet[2926]: I0124 00:54:20.390778 2926 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:54:20.434230 kubelet[2926]: E0124 00:54:20.431934 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.434230 kubelet[2926]: E0124 00:54:20.432888 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.436215 kubelet[2926]: E0124 00:54:20.434859 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.512485 kubelet[2926]: E0124 00:54:20.510217 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.518835 containerd[1614]: time="2026-01-24T00:54:20.517642957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6cbl,Uid:4f7a28e0-1661-4169-837b-259d650169c1,Namespace:kube-system,Attempt:0,}" Jan 24 00:54:20.753086 containerd[1614]: time="2026-01-24T00:54:20.751918327Z" level=info msg="connecting to shim ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47" address="unix:///run/containerd/s/9a54daa733932d75516c5cf9867bd92299411cd08f88d47e08c5a1e50693ddb5" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:54:20.928795 kubelet[2926]: E0124 00:54:20.925944 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.928795 kubelet[2926]: E0124 00:54:20.928770 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:21.190932 systemd[1]: Started cri-containerd-ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47.scope - libcontainer container ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47. 
Jan 24 00:54:21.324000 audit: BPF prog-id=131 op=LOAD Jan 24 00:54:21.345589 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 00:54:21.346114 kernel: audit: type=1334 audit(1769216061.324:435): prog-id=131 op=LOAD Jan 24 00:54:21.326000 audit: BPF prog-id=132 op=LOAD Jan 24 00:54:21.326000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.385787 kernel: audit: type=1334 audit(1769216061.326:436): prog-id=132 op=LOAD Jan 24 00:54:21.385881 kernel: audit: type=1300 audit(1769216061.326:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.502125 kernel: audit: type=1327 audit(1769216061.326:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.326000 audit: BPF prog-id=132 op=UNLOAD Jan 24 00:54:21.524822 kernel: audit: type=1334 audit(1769216061.326:437): prog-id=132 op=UNLOAD Jan 24 00:54:21.599063 kernel: audit: type=1300 audit(1769216061.326:437): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.326000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.669753 kernel: audit: type=1327 audit(1769216061.326:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.330000 audit: BPF prog-id=133 op=LOAD Jan 24 00:54:21.686519 kernel: audit: type=1334 audit(1769216061.330:438): prog-id=133 op=LOAD Jan 24 00:54:21.330000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.729801 kernel: audit: type=1300 audit(1769216061.330:438): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:54:21.729930 kernel: audit: type=1327 audit(1769216061.330:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.330000 audit: BPF prog-id=134 op=LOAD Jan 24 00:54:21.330000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.330000 audit: BPF prog-id=134 op=UNLOAD Jan 24 00:54:21.330000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 
Jan 24 00:54:21.330000 audit: BPF prog-id=133 op=UNLOAD Jan 24 00:54:21.330000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.330000 audit: BPF prog-id=135 op=LOAD Jan 24 00:54:21.330000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2980 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:21.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564386635363864313935363132623663396165323032386138646231 Jan 24 00:54:21.795801 containerd[1614]: time="2026-01-24T00:54:21.794825096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6cbl,Uid:4f7a28e0-1661-4169-837b-259d650169c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47\"" Jan 24 00:54:21.798604 kubelet[2926]: E0124 00:54:21.797735 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:21.814571 containerd[1614]: time="2026-01-24T00:54:21.813022757Z" 
level=info msg="CreateContainer within sandbox \"ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:54:21.927623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154471277.mount: Deactivated successfully. Jan 24 00:54:21.951613 containerd[1614]: time="2026-01-24T00:54:21.951068915Z" level=info msg="Container 732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:54:21.968760 kubelet[2926]: E0124 00:54:21.968689 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:21.985663 kubelet[2926]: E0124 00:54:21.985628 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:22.004753 containerd[1614]: time="2026-01-24T00:54:22.004699689Z" level=info msg="CreateContainer within sandbox \"ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85\"" Jan 24 00:54:22.017563 containerd[1614]: time="2026-01-24T00:54:22.016707243Z" level=info msg="StartContainer for \"732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85\"" Jan 24 00:54:22.082146 containerd[1614]: time="2026-01-24T00:54:22.082005925Z" level=info msg="connecting to shim 732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85" address="unix:///run/containerd/s/9a54daa733932d75516c5cf9867bd92299411cd08f88d47e08c5a1e50693ddb5" protocol=ttrpc version=3 Jan 24 00:54:22.285078 systemd[1]: Started cri-containerd-732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85.scope - libcontainer container 
732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85. Jan 24 00:54:22.513000 audit: BPF prog-id=136 op=LOAD Jan 24 00:54:22.513000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2980 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:22.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733323035386633613639306137393363646236323363373738613430 Jan 24 00:54:22.514000 audit: BPF prog-id=137 op=LOAD Jan 24 00:54:22.514000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2980 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733323035386633613639306137393363646236323363373738613430 Jan 24 00:54:22.514000 audit: BPF prog-id=137 op=UNLOAD Jan 24 00:54:22.514000 audit[3018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:22.514000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733323035386633613639306137393363646236323363373738613430 Jan 24 00:54:22.516000 audit: BPF prog-id=136 op=UNLOAD Jan 24 00:54:22.516000 audit[3018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2980 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:22.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733323035386633613639306137393363646236323363373738613430 Jan 24 00:54:22.516000 audit: BPF prog-id=138 op=LOAD Jan 24 00:54:22.516000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2980 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:22.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733323035386633613639306137393363646236323363373738613430 Jan 24 00:54:22.860803 containerd[1614]: time="2026-01-24T00:54:22.853024923Z" level=info msg="StartContainer for \"732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85\" returns successfully" Jan 24 00:54:23.007612 kubelet[2926]: E0124 00:54:23.006727 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:23.010537 kubelet[2926]: E0124 00:54:23.009915 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:24.011894 kubelet[2926]: E0124 00:54:24.010500 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:24.301017 kubelet[2926]: I0124 00:54:24.300128 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n6cbl" podStartSLOduration=5.300100117 podStartE2EDuration="5.300100117s" podCreationTimestamp="2026-01-24 00:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:23.063549953 +0000 UTC m=+4.642779696" watchObservedRunningTime="2026-01-24 00:54:24.300100117 +0000 UTC m=+5.879329879" Jan 24 00:54:24.333113 systemd[1]: Created slice kubepods-besteffort-pod7a85b2a3_48bd_4881_ba31_33b577d3c774.slice - libcontainer container kubepods-besteffort-pod7a85b2a3_48bd_4881_ba31_33b577d3c774.slice. 
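The `audit: PROCTITLE` records above carry each process's command line as a hex-encoded byte string with NUL-separated arguments. A minimal decoding sketch (the sample hex is a verbatim prefix of the `runc` proctitle logged above; the helper name is ours):

```python
# Decode an audit PROCTITLE value: hex-encoded bytes, argv separated by NULs.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Verbatim prefix of the runc PROCTITLE records in the log above.
sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
          "2F6B38732E696F")
print(decode_proctitle(sample))
# → ['runc', '--root', '/run/containerd/runc/k8s.io']
```

This makes the otherwise opaque proctitle fields readable: they show runc being invoked against the containerd-managed `k8s.io` namespace for each task.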
Jan 24 00:54:24.394745 kubelet[2926]: I0124 00:54:24.394135 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vsz\" (UniqueName: \"kubernetes.io/projected/7a85b2a3-48bd-4881-ba31-33b577d3c774-kube-api-access-c2vsz\") pod \"tigera-operator-7dcd859c48-p5hq9\" (UID: \"7a85b2a3-48bd-4881-ba31-33b577d3c774\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5hq9" Jan 24 00:54:24.399610 kubelet[2926]: I0124 00:54:24.398656 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a85b2a3-48bd-4881-ba31-33b577d3c774-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p5hq9\" (UID: \"7a85b2a3-48bd-4881-ba31-33b577d3c774\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5hq9" Jan 24 00:54:24.680193 containerd[1614]: time="2026-01-24T00:54:24.679229952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5hq9,Uid:7a85b2a3-48bd-4881-ba31-33b577d3c774,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:54:24.933749 containerd[1614]: time="2026-01-24T00:54:24.933122961Z" level=info msg="connecting to shim 953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134" address="unix:///run/containerd/s/6c07ba2a9a8cecf45cd1c57aca89508f5ab9e8d17886329b6deb41c46e6be088" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:54:25.006000 audit[3107]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:25.006000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc61e2bf10 a2=0 a3=7ffc61e2befc items=0 ppid=3030 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.006000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:54:25.049973 kubelet[2926]: E0124 00:54:25.048080 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:25.064000 audit[3103]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.064000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeef858830 a2=0 a3=7ffeef85881c items=0 ppid=3030 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:54:25.097000 audit[3113]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:25.097000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd257add0 a2=0 a3=7fffd257adbc items=0 ppid=3030 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:54:25.107000 audit[3118]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.111000 audit[3124]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3124 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:25.111000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3e31acd0 a2=0 a3=7ffd3e31acbc items=0 ppid=3030 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.111000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:54:25.107000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1375a790 a2=0 a3=7ffe1375a77c items=0 ppid=3030 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:54:25.143000 audit[3125]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.143000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebb08a2d0 a2=0 a3=7ffebb08a2bc items=0 ppid=3030 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:54:25.173044 systemd[1]: Started cri-containerd-953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134.scope - libcontainer container 953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134. 
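The SYSCALL records interleaved above are flat `key=value` pairs, some bare and some double-quoted (e.g. `comm="iptables"`). A small parsing sketch, assuming this simple field grammar (the regex and helper name are ours, the field names come from the records):

```python
import re

# key=value pairs; values are either a quoted string or a bare token.
FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_audit_fields(record: str) -> dict[str, str]:
    return {k: v.strip('"') for k, v in FIELD.findall(record)}

rec = ('arch=c000003e syscall=46 success=yes exit=104 ppid=3030 pid=3124 '
       'comm="ip6tables" exe="/usr/sbin/xtables-nft-multi"')
fields = parse_audit_fields(rec)
print(fields["syscall"], fields["comm"])
# → 46 ip6tables
```

Applied over the records above, this makes it easy to group the burst of `NETFILTER_CFG`/`SYSCALL` pairs by `pid` or `comm` and see that it is kube-proxy driving `xtables-nft-multi` to register its chains.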
Jan 24 00:54:25.178000 audit[3128]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.178000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec12cde40 a2=0 a3=7ffec12cde2c items=0 ppid=3030 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:54:25.208000 audit[3130]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.208000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe926ba400 a2=0 a3=7ffe926ba3ec items=0 ppid=3030 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 24 00:54:25.266000 audit[3142]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.266000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc0c5a5490 a2=0 a3=7ffc0c5a547c items=0 ppid=3030 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 24 00:54:25.283000 audit[3143]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.283000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0b0acc30 a2=0 a3=7fff0b0acc1c items=0 ppid=3030 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:54:25.298000 audit: BPF prog-id=139 op=LOAD Jan 24 00:54:25.301000 audit: BPF prog-id=140 op=LOAD Jan 24 00:54:25.301000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.302000 audit: BPF prog-id=140 op=UNLOAD Jan 24 00:54:25.302000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.304000 audit: BPF prog-id=141 op=LOAD Jan 24 00:54:25.304000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.304000 audit: BPF prog-id=142 op=LOAD Jan 24 00:54:25.304000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.304000 audit: BPF prog-id=142 op=UNLOAD Jan 24 00:54:25.304000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3114 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.304000 audit: BPF prog-id=141 op=UNLOAD Jan 24 00:54:25.304000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.304000 audit: BPF prog-id=143 op=LOAD Jan 24 00:54:25.304000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935336537366331646434323338386565653033363937623133373334 Jan 24 00:54:25.314000 audit[3145]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 24 00:54:25.314000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc99d7b5c0 a2=0 a3=7ffc99d7b5ac items=0 ppid=3030 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:54:25.340000 audit[3147]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.340000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdebd257a0 a2=0 a3=7ffdebd2578c items=0 ppid=3030 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:54:25.365000 audit[3149]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.365000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff13c7c280 a2=0 a3=7fff13c7c26c items=0 ppid=3030 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.365000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:54:25.413000 audit[3152]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.413000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd3bdd8020 a2=0 a3=7ffd3bdd800c items=0 ppid=3030 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 24 00:54:25.429000 audit[3153]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.429000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde133c0f0 a2=0 a3=7ffde133c0dc items=0 ppid=3030 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:54:25.471000 audit[3155]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.471000 audit[3155]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffdf631d7d0 a2=0 a3=7ffdf631d7bc items=0 ppid=3030 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:54:25.483000 audit[3157]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.483000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff731e2fa0 a2=0 a3=7fff731e2f8c items=0 ppid=3030 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:54:25.505000 audit[3164]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.505000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce67937c0 a2=0 a3=7ffce67937ac items=0 ppid=3030 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.505000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:54:25.526185 containerd[1614]: time="2026-01-24T00:54:25.526001699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5hq9,Uid:7a85b2a3-48bd-4881-ba31-33b577d3c774,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134\"" Jan 24 00:54:25.536885 containerd[1614]: time="2026-01-24T00:54:25.536817109Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:54:25.562000 audit[3167]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.562000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed6c3f8b0 a2=0 a3=7ffed6c3f89c items=0 ppid=3030 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:54:25.610000 audit[3170]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.610000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4015c3e0 a2=0 a3=7ffe4015c3cc items=0 ppid=3030 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:54:25.647000 audit[3171]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.647000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc57fc5640 a2=0 a3=7ffc57fc562c items=0 ppid=3030 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:54:25.669000 audit[3173]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.669000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffeec697b10 a2=0 a3=7ffeec697afc items=0 ppid=3030 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:54:25.780000 audit[3176]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 24 00:54:25.780000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec1e2b250 a2=0 a3=7ffec1e2b23c items=0 ppid=3030 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:54:25.807000 audit[3177]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.807000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd02833c00 a2=0 a3=7ffd02833bec items=0 ppid=3030 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:54:25.878000 audit[3179]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:54:25.878000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdb3e7efe0 a2=0 a3=7ffdb3e7efcc items=0 ppid=3030 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:25.878000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:54:26.136000 audit[3185]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:26.136000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa976adf0 a2=0 a3=7fffa976addc items=0 ppid=3030 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:26.182000 audit[3185]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:26.182000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffa976adf0 a2=0 a3=7fffa976addc items=0 ppid=3030 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:26.202000 audit[3192]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.202000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4fbea230 a2=0 a3=7ffe4fbea21c items=0 ppid=3030 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:54:26.225000 audit[3194]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.225000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc75b1a7c0 a2=0 a3=7ffc75b1a7ac items=0 ppid=3030 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 24 00:54:26.280000 audit[3197]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.280000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeed61c3a0 a2=0 a3=7ffeed61c38c items=0 ppid=3030 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 24 00:54:26.294000 
audit[3198]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.294000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1991db40 a2=0 a3=7ffd1991db2c items=0 ppid=3030 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:54:26.319000 audit[3200]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.319000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffddcbab770 a2=0 a3=7ffddcbab75c items=0 ppid=3030 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:54:26.327000 audit[3201]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.351800 kernel: kauditd_printk_skb: 145 callbacks suppressed Jan 24 00:54:26.351902 kernel: audit: type=1325 audit(1769216066.327:488): table=filter:86 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.327000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffc37f10080 a2=0 a3=7ffc37f1006c items=0 ppid=3030 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.424692 kernel: audit: type=1300 audit(1769216066.327:488): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc37f10080 a2=0 a3=7ffc37f1006c items=0 ppid=3030 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.433833 kernel: audit: type=1327 audit(1769216066.327:488): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:54:26.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:54:26.389000 audit[3203]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.502818 kernel: audit: type=1325 audit(1769216066.389:489): table=filter:87 family=10 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.502893 kernel: audit: type=1300 audit(1769216066.389:489): arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbd760cb0 a2=0 a3=7ffcbd760c9c items=0 ppid=3030 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.389000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbd760cb0 a2=0 a3=7ffcbd760c9c items=0 ppid=3030 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 00:54:26.643833 kernel: audit: type=1327 audit(1769216066.389:489): proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 00:54:26.647862 kernel: audit: type=1325 audit(1769216066.420:490): table=filter:88 family=10 entries=2 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.420000 audit[3206]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.420000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe7d60ee30 a2=0 a3=7ffe7d60ee1c items=0 ppid=3030 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.735957 kernel: audit: type=1300 audit(1769216066.420:490): arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe7d60ee30 a2=0 a3=7ffe7d60ee1c items=0 ppid=3030 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.420000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:54:26.439000 audit[3207]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.812748 kernel: audit: type=1327 audit(1769216066.420:490): proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:54:26.812864 kernel: audit: type=1325 audit(1769216066.439:491): table=filter:89 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.439000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce9e85440 a2=0 a3=7ffce9e8542c items=0 ppid=3030 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:54:26.501000 audit[3209]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.501000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff21dfc520 a2=0 a3=7fff21dfc50c items=0 ppid=3030 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:54:26.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:54:26.553000 audit[3210]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.553000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1355a0d0 a2=0 a3=7ffe1355a0bc items=0 ppid=3030 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:54:26.684000 audit[3212]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.684000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbf5c5000 a2=0 a3=7ffcbf5c4fec items=0 ppid=3030 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:54:26.729000 audit[3217]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.729000 audit[3217]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd69552130 a2=0 a3=7ffd6955211c items=0 ppid=3030 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:54:26.819000 audit[3222]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.819000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda949b050 a2=0 a3=7ffda949b03c items=0 ppid=3030 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 24 00:54:26.833000 audit[3223]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.833000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca97aac00 a2=0 a3=7ffca97aabec items=0 ppid=3030 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.833000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:54:26.858000 audit[3225]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.858000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd05176280 a2=0 a3=7ffd0517626c items=0 ppid=3030 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:54:26.896000 audit[3228]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.896000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0b0d1ac0 a2=0 a3=7ffd0b0d1aac items=0 ppid=3030 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:54:26.905000 audit[3229]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.905000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe885630e0 a2=0 a3=7ffe885630cc items=0 
ppid=3030 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:54:26.911581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848508049.mount: Deactivated successfully. Jan 24 00:54:26.936000 audit[3231]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.936000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffd8d8a2c0 a2=0 a3=7fffd8d8a2ac items=0 ppid=3030 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:54:26.958000 audit[3232]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:26.958000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9136d530 a2=0 a3=7fff9136d51c items=0 ppid=3030 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:26.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:54:27.004000 
audit[3234]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.004000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4c92f4c0 a2=0 a3=7ffc4c92f4ac items=0 ppid=3030 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:54:27.056000 audit[3237]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:54:27.056000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc97a370f0 a2=0 a3=7ffc97a370dc items=0 ppid=3030 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:54:27.090000 audit[3239]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:54:27.090000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd2d3de9c0 a2=0 a3=7ffd2d3de9ac items=0 ppid=3030 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.090000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:27.092000 audit[3239]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:54:27.092000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd2d3de9c0 a2=0 a3=7ffd2d3de9ac items=0 ppid=3030 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:27.092000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:28.385077 kubelet[2926]: E0124 00:54:28.384986 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:29.098854 kubelet[2926]: E0124 00:54:29.097922 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:35.078705 kubelet[2926]: E0124 00:54:35.077134 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:35.505755 containerd[1614]: time="2026-01-24T00:54:35.504783873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:35.522419 containerd[1614]: time="2026-01-24T00:54:35.521693328Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 00:54:35.540187 containerd[1614]: time="2026-01-24T00:54:35.538595801Z" 
level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:35.555953 containerd[1614]: time="2026-01-24T00:54:35.555632005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:35.557293 containerd[1614]: time="2026-01-24T00:54:35.556585109Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 10.019722213s" Jan 24 00:54:35.557293 containerd[1614]: time="2026-01-24T00:54:35.556627350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:54:35.573754 containerd[1614]: time="2026-01-24T00:54:35.573216458Z" level=info msg="CreateContainer within sandbox \"953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:54:35.659792 containerd[1614]: time="2026-01-24T00:54:35.659034906Z" level=info msg="Container 5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:54:35.665594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690470088.mount: Deactivated successfully. 
Jan 24 00:54:35.693113 containerd[1614]: time="2026-01-24T00:54:35.692621529Z" level=info msg="CreateContainer within sandbox \"953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad\"" Jan 24 00:54:35.697219 containerd[1614]: time="2026-01-24T00:54:35.696787427Z" level=info msg="StartContainer for \"5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad\"" Jan 24 00:54:35.698997 containerd[1614]: time="2026-01-24T00:54:35.698747346Z" level=info msg="connecting to shim 5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad" address="unix:///run/containerd/s/6c07ba2a9a8cecf45cd1c57aca89508f5ab9e8d17886329b6deb41c46e6be088" protocol=ttrpc version=3 Jan 24 00:54:35.962588 systemd[1]: Started cri-containerd-5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad.scope - libcontainer container 5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad. 
Jan 24 00:54:36.057518 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 24 00:54:36.057680 kernel: audit: type=1334 audit(1769216076.043:507): prog-id=144 op=LOAD Jan 24 00:54:36.043000 audit: BPF prog-id=144 op=LOAD Jan 24 00:54:36.085679 kernel: audit: type=1334 audit(1769216076.065:508): prog-id=145 op=LOAD Jan 24 00:54:36.065000 audit: BPF prog-id=145 op=LOAD Jan 24 00:54:36.065000 audit[3247]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c2238 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.148592 kernel: audit: type=1300 audit(1769216076.065:508): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c2238 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.065000 audit: BPF prog-id=145 op=UNLOAD Jan 24 00:54:36.218823 kernel: audit: type=1327 audit(1769216076.065:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.219779 kernel: audit: type=1334 audit(1769216076.065:509): prog-id=145 op=UNLOAD Jan 24 00:54:36.065000 audit[3247]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3247 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.295493 kernel: audit: type=1300 audit(1769216076.065:509): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.296057 kernel: audit: type=1327 audit(1769216076.065:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.070000 audit: BPF prog-id=146 op=LOAD Jan 24 00:54:36.349598 kernel: audit: type=1334 audit(1769216076.070:510): prog-id=146 op=LOAD Jan 24 00:54:36.350033 kernel: audit: type=1300 audit(1769216076.070:510): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c2488 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.070000 audit[3247]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c2488 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:54:36.392232 kernel: audit: type=1327 audit(1769216076.070:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.399778 containerd[1614]: time="2026-01-24T00:54:36.399669102Z" level=info msg="StartContainer for \"5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad\" returns successfully" Jan 24 00:54:36.070000 audit: BPF prog-id=147 op=LOAD Jan 24 00:54:36.070000 audit[3247]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001c2218 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.070000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:54:36.070000 audit[3247]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.070000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.070000 audit: BPF prog-id=146 op=UNLOAD Jan 24 00:54:36.070000 audit[3247]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:36.070000 audit: BPF prog-id=148 op=LOAD Jan 24 00:54:36.070000 audit[3247]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c26e8 a2=98 a3=0 items=0 ppid=3097 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373264353930323339633765343563386562643164366262356563 Jan 24 00:54:49.405643 sudo[1848]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:49.482778 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 00:54:49.483177 kernel: audit: type=1106 audit(1769216089.408:515): pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:49.408000 audit[1848]: USER_END pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:49.483822 sshd[1847]: Connection closed by 10.0.0.1 port 34506 Jan 24 00:54:49.483902 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:49.410000 audit[1848]: CRED_DISP pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:49.549838 kernel: audit: type=1104 audit(1769216089.410:516): pid=1848 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:54:49.583000 kernel: audit: type=1106 audit(1769216089.500:517): pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:49.500000 audit[1843]: USER_END pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:49.539087 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. 
Jan 24 00:54:49.545023 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:34506.service: Deactivated successfully. Jan 24 00:54:49.567938 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:54:49.569173 systemd[1]: session-10.scope: Consumed 14.752s CPU time, 219M memory peak. Jan 24 00:54:49.500000 audit[1843]: CRED_DISP pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:49.658134 kernel: audit: type=1104 audit(1769216089.500:518): pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:54:49.614645 systemd-logind[1597]: Removed session 10. Jan 24 00:54:49.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:54:49.711140 kernel: audit: type=1131 audit(1769216089.545:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:34506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:54:57.354000 audit[3341]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.386527 kernel: audit: type=1325 audit(1769216097.354:520): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.354000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdf7bdd270 a2=0 a3=7ffdf7bdd25c items=0 ppid=3030 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.478668 kernel: audit: type=1300 audit(1769216097.354:520): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdf7bdd270 a2=0 a3=7ffdf7bdd25c items=0 ppid=3030 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.515192 kernel: audit: type=1327 audit(1769216097.354:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.515610 kernel: audit: type=1325 audit(1769216097.402:521): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.402000 audit[3341]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.402000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf7bdd270 a2=0 a3=0 items=0 ppid=3030 
pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.614040 kernel: audit: type=1300 audit(1769216097.402:521): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf7bdd270 a2=0 a3=0 items=0 ppid=3030 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.658594 kernel: audit: type=1327 audit(1769216097.402:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.590000 audit[3343]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.698073 kernel: audit: type=1325 audit(1769216097.590:522): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.590000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffcfb2ff50 a2=0 a3=7fffcfb2ff3c items=0 ppid=3030 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.799484 kernel: audit: type=1300 audit(1769216097.590:522): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffcfb2ff50 a2=0 a3=7fffcfb2ff3c items=0 
ppid=3030 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.799656 kernel: audit: type=1327 audit(1769216097.590:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:54:57.799706 kernel: audit: type=1325 audit(1769216097.696:523): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.696000 audit[3343]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:54:57.696000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcfb2ff50 a2=0 a3=0 items=0 ppid=3030 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:54:57.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:08.820393 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 24 00:55:08.820622 kernel: audit: type=1325 audit(1769216108.777:524): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:08.777000 audit[3345]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:08.869851 kernel: audit: type=1300 audit(1769216108.777:524): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcc1f8aba0 a2=0 a3=7ffcc1f8ab8c items=0 ppid=3030 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:08.777000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcc1f8aba0 a2=0 a3=7ffcc1f8ab8c items=0 ppid=3030 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:08.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:08.906433 kernel: audit: type=1327 audit(1769216108.777:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:08.952115 kernel: audit: type=1325 audit(1769216108.922:525): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:08.922000 audit[3345]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:08.922000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc1f8aba0 a2=0 a3=0 items=0 ppid=3030 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:08.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:09.023761 kernel: audit: type=1300 audit(1769216108.922:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc1f8aba0 a2=0 a3=0 items=0 ppid=3030 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:09.023840 kernel: audit: type=1327 audit(1769216108.922:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:09.117000 audit[3347]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:09.157791 kernel: audit: type=1325 audit(1769216109.117:526): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:09.117000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc08d5860 a2=0 a3=7fffc08d584c items=0 ppid=3030 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:09.206502 kernel: audit: type=1300 audit(1769216109.117:526): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc08d5860 a2=0 a3=7fffc08d584c items=0 ppid=3030 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:09.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:09.207000 audit[3347]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:09.263807 kernel: audit: type=1327 audit(1769216109.117:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:09.263991 kernel: audit: type=1325 
audit(1769216109.207:527): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:09.207000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc08d5860 a2=0 a3=0 items=0 ppid=3030 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:09.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:10.078000 audit[3349]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:10.078000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1856d020 a2=0 a3=7ffe1856d00c items=0 ppid=3030 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:10.078000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:10.089000 audit[3349]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:10.089000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1856d020 a2=0 a3=0 items=0 ppid=3030 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:10.089000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:11.261000 audit[3351]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:11.261000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2ab13910 a2=0 a3=7ffe2ab138fc items=0 ppid=3030 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:11.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:11.270000 audit[3351]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:11.270000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2ab13910 a2=0 a3=0 items=0 ppid=3030 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:11.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.153000 audit[3353]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.180725 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 24 00:55:14.181027 kernel: audit: type=1325 audit(1769216114.153:532): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.153000 audit[3353]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=8224 a0=3 a1=7ffd8ce45d90 a2=0 a3=7ffd8ce45d7c items=0 ppid=3030 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.383819 kernel: audit: type=1300 audit(1769216114.153:532): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd8ce45d90 a2=0 a3=7ffd8ce45d7c items=0 ppid=3030 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.383992 kernel: audit: type=1327 audit(1769216114.153:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.272000 audit[3353]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.431705 kernel: audit: type=1325 audit(1769216114.272:533): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.431930 kubelet[2926]: I0124 00:55:14.428722 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p5hq9" podStartSLOduration=40.402355499 podStartE2EDuration="50.428703547s" podCreationTimestamp="2026-01-24 00:54:24 +0000 UTC" firstStartedPulling="2026-01-24 00:54:25.531884446 +0000 UTC m=+7.111114179" lastFinishedPulling="2026-01-24 00:54:35.558232494 +0000 UTC m=+17.137462227" observedRunningTime="2026-01-24 00:54:37.441801543 +0000 UTC m=+19.021031276" watchObservedRunningTime="2026-01-24 
00:55:14.428703547 +0000 UTC m=+56.007933279" Jan 24 00:55:14.272000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8ce45d90 a2=0 a3=0 items=0 ppid=3030 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.537732 kernel: audit: type=1300 audit(1769216114.272:533): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8ce45d90 a2=0 a3=0 items=0 ppid=3030 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.537877 kernel: audit: type=1327 audit(1769216114.272:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.516912 systemd[1]: Created slice kubepods-besteffort-podc68d4d18_aae8_46b6_971b_e0ce5b3573ac.slice - libcontainer container kubepods-besteffort-podc68d4d18_aae8_46b6_971b_e0ce5b3573ac.slice. 
Jan 24 00:55:14.538709 kubelet[2926]: I0124 00:55:14.513788 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c68d4d18-aae8-46b6-971b-e0ce5b3573ac-typha-certs\") pod \"calico-typha-6c8bfc665b-tb4px\" (UID: \"c68d4d18-aae8-46b6-971b-e0ce5b3573ac\") " pod="calico-system/calico-typha-6c8bfc665b-tb4px" Jan 24 00:55:14.538709 kubelet[2926]: I0124 00:55:14.513847 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhckp\" (UniqueName: \"kubernetes.io/projected/c68d4d18-aae8-46b6-971b-e0ce5b3573ac-kube-api-access-qhckp\") pod \"calico-typha-6c8bfc665b-tb4px\" (UID: \"c68d4d18-aae8-46b6-971b-e0ce5b3573ac\") " pod="calico-system/calico-typha-6c8bfc665b-tb4px" Jan 24 00:55:14.538709 kubelet[2926]: I0124 00:55:14.513880 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68d4d18-aae8-46b6-971b-e0ce5b3573ac-tigera-ca-bundle\") pod \"calico-typha-6c8bfc665b-tb4px\" (UID: \"c68d4d18-aae8-46b6-971b-e0ce5b3573ac\") " pod="calico-system/calico-typha-6c8bfc665b-tb4px" Jan 24 00:55:14.816486 kernel: audit: type=1325 audit(1769216114.785:534): table=filter:119 family=2 entries=22 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.785000 audit[3356]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.785000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd1cddf340 a2=0 a3=7ffd1cddf32c items=0 ppid=3030 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:55:14.919709 kernel: audit: type=1300 audit(1769216114.785:534): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd1cddf340 a2=0 a3=7ffd1cddf32c items=0 ppid=3030 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.919907 kernel: audit: type=1327 audit(1769216114.785:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.919950 kernel: audit: type=1325 audit(1769216114.822:535): table=nat:120 family=2 entries=12 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.822000 audit[3356]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:14.822000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1cddf340 a2=0 a3=0 items=0 ppid=3030 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:14.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:14.960849 kubelet[2926]: E0124 00:55:14.960809 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:14.963445 containerd[1614]: time="2026-01-24T00:55:14.963117205Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-6c8bfc665b-tb4px,Uid:c68d4d18-aae8-46b6-971b-e0ce5b3573ac,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:15.166702 kubelet[2926]: I0124 00:55:15.161984 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-cni-net-dir\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.166702 kubelet[2926]: I0124 00:55:15.162037 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-var-lib-calico\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.166702 kubelet[2926]: I0124 00:55:15.162059 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-var-run-calico\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.166702 kubelet[2926]: I0124 00:55:15.162079 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-xtables-lock\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.166702 kubelet[2926]: I0124 00:55:15.162137 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-flexvol-driver-host\") pod \"calico-node-6cvqh\" (UID: 
\"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167036 kubelet[2926]: I0124 00:55:15.162158 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4dh\" (UniqueName: \"kubernetes.io/projected/23ef921d-4c70-49e8-a595-d8305d10d0a4-kube-api-access-fk4dh\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167036 kubelet[2926]: I0124 00:55:15.162180 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23ef921d-4c70-49e8-a595-d8305d10d0a4-node-certs\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167036 kubelet[2926]: I0124 00:55:15.162202 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-cni-bin-dir\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167036 kubelet[2926]: I0124 00:55:15.162228 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-lib-modules\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167036 kubelet[2926]: I0124 00:55:15.162514 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-cni-log-dir\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " 
pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167490 kubelet[2926]: I0124 00:55:15.162543 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23ef921d-4c70-49e8-a595-d8305d10d0a4-policysync\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.167490 kubelet[2926]: I0124 00:55:15.162563 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ef921d-4c70-49e8-a595-d8305d10d0a4-tigera-ca-bundle\") pod \"calico-node-6cvqh\" (UID: \"23ef921d-4c70-49e8-a595-d8305d10d0a4\") " pod="calico-system/calico-node-6cvqh" Jan 24 00:55:15.208817 systemd[1]: Created slice kubepods-besteffort-pod23ef921d_4c70_49e8_a595_d8305d10d0a4.slice - libcontainer container kubepods-besteffort-pod23ef921d_4c70_49e8_a595_d8305d10d0a4.slice. Jan 24 00:55:15.263670 containerd[1614]: time="2026-01-24T00:55:15.263607903Z" level=info msg="connecting to shim cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4" address="unix:///run/containerd/s/a927722b3d69e2ccd80eb8f823df61cfe0472c38db31feaacbdd14688b934e85" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:55:15.278059 kubelet[2926]: E0124 00:55:15.277677 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.278632 kubelet[2926]: W0124 00:55:15.278599 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.279701 kubelet[2926]: E0124 00:55:15.279671 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.286075 kubelet[2926]: E0124 00:55:15.286051 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.286208 kubelet[2926]: W0124 00:55:15.286185 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.288508 kubelet[2926]: E0124 00:55:15.286582 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.300978 kubelet[2926]: E0124 00:55:15.299806 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.301129 kubelet[2926]: W0124 00:55:15.301104 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.301233 kubelet[2926]: E0124 00:55:15.301211 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.321803 kubelet[2926]: E0124 00:55:15.321772 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.322493 kubelet[2926]: W0124 00:55:15.322242 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.324500 kubelet[2926]: E0124 00:55:15.324463 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.325891 kubelet[2926]: E0124 00:55:15.325870 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.326006 kubelet[2926]: W0124 00:55:15.325988 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.326215 kubelet[2926]: E0124 00:55:15.326197 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.328220 kubelet[2926]: E0124 00:55:15.328200 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.329057 kubelet[2926]: W0124 00:55:15.329036 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.329585 kubelet[2926]: E0124 00:55:15.329564 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.333619 kubelet[2926]: E0124 00:55:15.333600 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.333727 kubelet[2926]: W0124 00:55:15.333710 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.333859 kubelet[2926]: E0124 00:55:15.333843 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.338887 kubelet[2926]: E0124 00:55:15.338868 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.338969 kubelet[2926]: W0124 00:55:15.338953 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.341840 kubelet[2926]: E0124 00:55:15.340670 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.344791 kubelet[2926]: E0124 00:55:15.344769 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.344894 kubelet[2926]: W0124 00:55:15.344876 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.345733 kubelet[2926]: E0124 00:55:15.345711 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.345884 kubelet[2926]: E0124 00:55:15.345869 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.345958 kubelet[2926]: W0124 00:55:15.345943 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.347961 kubelet[2926]: E0124 00:55:15.347844 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.349502 kubelet[2926]: E0124 00:55:15.348546 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.349502 kubelet[2926]: W0124 00:55:15.348561 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.349502 kubelet[2926]: E0124 00:55:15.348634 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.357459 kubelet[2926]: E0124 00:55:15.355998 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.357459 kubelet[2926]: W0124 00:55:15.356102 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.357578 kubelet[2926]: E0124 00:55:15.357549 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.361362 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.368909 kubelet[2926]: W0124 00:55:15.361556 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.361791 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.361888 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.368909 kubelet[2926]: W0124 00:55:15.361897 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.362205 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.362704 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.368909 kubelet[2926]: W0124 00:55:15.362715 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.362775 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.368909 kubelet[2926]: E0124 00:55:15.368665 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.369560 kubelet[2926]: W0124 00:55:15.368677 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.374656 kubelet[2926]: E0124 00:55:15.374523 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.382026 kubelet[2926]: E0124 00:55:15.379050 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.382026 kubelet[2926]: W0124 00:55:15.379162 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.382026 kubelet[2926]: E0124 00:55:15.379554 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.404892 kubelet[2926]: E0124 00:55:15.401219 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.404892 kubelet[2926]: W0124 00:55:15.401634 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.404892 kubelet[2926]: E0124 00:55:15.401706 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.415853 kubelet[2926]: E0124 00:55:15.415227 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.434736 kubelet[2926]: W0124 00:55:15.433564 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.439497 kubelet[2926]: E0124 00:55:15.437081 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.450902 kubelet[2926]: E0124 00:55:15.448973 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.450902 kubelet[2926]: W0124 00:55:15.449098 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.450902 kubelet[2926]: E0124 00:55:15.450894 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.450902 kubelet[2926]: W0124 00:55:15.450912 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.451766 kubelet[2926]: E0124 00:55:15.450972 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.453020 kubelet[2926]: E0124 00:55:15.452698 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.453114 kubelet[2926]: E0124 00:55:15.453064 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.453114 kubelet[2926]: W0124 00:55:15.453081 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.454672 kubelet[2926]: E0124 00:55:15.454091 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.459743 kubelet[2926]: E0124 00:55:15.459054 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.460958 kubelet[2926]: W0124 00:55:15.460829 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.461836 kubelet[2926]: E0124 00:55:15.461769 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.462851 kubelet[2926]: E0124 00:55:15.462088 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.462851 kubelet[2926]: W0124 00:55:15.462193 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.462851 kubelet[2926]: E0124 00:55:15.462212 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.537475 kubelet[2926]: E0124 00:55:15.536588 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:15.557125 containerd[1614]: time="2026-01-24T00:55:15.555763567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6cvqh,Uid:23ef921d-4c70-49e8-a595-d8305d10d0a4,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:15.586757 kubelet[2926]: E0124 00:55:15.581759 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:15.587452 kubelet[2926]: E0124 00:55:15.587140 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.587452 kubelet[2926]: W0124 00:55:15.587169 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Jan 24 00:55:15.587452 kubelet[2926]: E0124 00:55:15.587199 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.591601 kubelet[2926]: E0124 00:55:15.591184 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.591601 kubelet[2926]: W0124 00:55:15.591561 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.591601 kubelet[2926]: E0124 00:55:15.591593 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.600843 kubelet[2926]: E0124 00:55:15.600741 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.600843 kubelet[2926]: W0124 00:55:15.600839 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.600966 kubelet[2926]: E0124 00:55:15.600866 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.604243 kubelet[2926]: E0124 00:55:15.604128 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.604243 kubelet[2926]: W0124 00:55:15.604218 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.606655 kubelet[2926]: E0124 00:55:15.604239 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.610753 systemd[1]: Started cri-containerd-cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4.scope - libcontainer container cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4. Jan 24 00:55:15.616144 kubelet[2926]: E0124 00:55:15.615854 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.616144 kubelet[2926]: W0124 00:55:15.615955 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.616144 kubelet[2926]: E0124 00:55:15.615985 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.620921 kubelet[2926]: E0124 00:55:15.620765 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.620921 kubelet[2926]: W0124 00:55:15.620866 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.620921 kubelet[2926]: E0124 00:55:15.620891 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.634159 kubelet[2926]: E0124 00:55:15.634002 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.634159 kubelet[2926]: W0124 00:55:15.634033 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.634159 kubelet[2926]: E0124 00:55:15.634062 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.662508 kubelet[2926]: E0124 00:55:15.660792 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.662508 kubelet[2926]: W0124 00:55:15.660826 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.662508 kubelet[2926]: E0124 00:55:15.660853 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.669468 kubelet[2926]: E0124 00:55:15.667803 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.669468 kubelet[2926]: W0124 00:55:15.668084 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.669468 kubelet[2926]: E0124 00:55:15.668116 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.674831 kubelet[2926]: E0124 00:55:15.669954 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.675703 kubelet[2926]: W0124 00:55:15.675625 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.675703 kubelet[2926]: E0124 00:55:15.675655 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.681165 kubelet[2926]: E0124 00:55:15.680867 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.681165 kubelet[2926]: W0124 00:55:15.680888 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.681165 kubelet[2926]: E0124 00:55:15.680913 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.692135 kubelet[2926]: E0124 00:55:15.686983 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.692135 kubelet[2926]: W0124 00:55:15.687085 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.692135 kubelet[2926]: E0124 00:55:15.687108 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.695899 kubelet[2926]: E0124 00:55:15.695788 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.695899 kubelet[2926]: W0124 00:55:15.695893 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.695985 kubelet[2926]: E0124 00:55:15.695922 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.700632 kubelet[2926]: E0124 00:55:15.699864 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.700632 kubelet[2926]: W0124 00:55:15.699967 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.700632 kubelet[2926]: E0124 00:55:15.699991 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.705216 kubelet[2926]: E0124 00:55:15.704929 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.705216 kubelet[2926]: W0124 00:55:15.705024 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.705216 kubelet[2926]: E0124 00:55:15.705046 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.714550 kubelet[2926]: E0124 00:55:15.714491 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.714550 kubelet[2926]: W0124 00:55:15.714519 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.714550 kubelet[2926]: E0124 00:55:15.714538 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.717823 kubelet[2926]: E0124 00:55:15.717707 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.717823 kubelet[2926]: W0124 00:55:15.717731 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.717823 kubelet[2926]: E0124 00:55:15.717750 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.717823 kubelet[2926]: I0124 00:55:15.717780 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1351988d-2da1-448e-bfda-fb7490691684-registration-dir\") pod \"csi-node-driver-htrd2\" (UID: \"1351988d-2da1-448e-bfda-fb7490691684\") " pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:15.719864 kubelet[2926]: E0124 00:55:15.719737 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.719864 kubelet[2926]: W0124 00:55:15.719837 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.727509 kubelet[2926]: E0124 00:55:15.726141 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.727509 kubelet[2926]: I0124 00:55:15.726496 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1351988d-2da1-448e-bfda-fb7490691684-varrun\") pod \"csi-node-driver-htrd2\" (UID: \"1351988d-2da1-448e-bfda-fb7490691684\") " pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:15.728580 kubelet[2926]: E0124 00:55:15.727948 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.728580 kubelet[2926]: W0124 00:55:15.727968 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.731863 kubelet[2926]: E0124 00:55:15.730481 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.732127 kubelet[2926]: E0124 00:55:15.732112 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.732189 kubelet[2926]: W0124 00:55:15.732176 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.732517 kubelet[2926]: E0124 00:55:15.732491 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.733132 kubelet[2926]: E0124 00:55:15.733114 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.733218 kubelet[2926]: W0124 00:55:15.733202 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.734020 kubelet[2926]: E0124 00:55:15.733999 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.734135 kubelet[2926]: I0124 00:55:15.734115 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1351988d-2da1-448e-bfda-fb7490691684-kubelet-dir\") pod \"csi-node-driver-htrd2\" (UID: \"1351988d-2da1-448e-bfda-fb7490691684\") " pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:15.734866 kubelet[2926]: E0124 00:55:15.734849 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.734936 kubelet[2926]: W0124 00:55:15.734923 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.735964 kubelet[2926]: E0124 00:55:15.735943 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.737842 kubelet[2926]: E0124 00:55:15.737824 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.737934 kubelet[2926]: W0124 00:55:15.737919 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.740466 kubelet[2926]: E0124 00:55:15.740229 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.741087 kubelet[2926]: E0124 00:55:15.741069 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.741161 kubelet[2926]: W0124 00:55:15.741148 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.741229 kubelet[2926]: E0124 00:55:15.741216 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.743715 kubelet[2926]: E0124 00:55:15.743685 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.743715 kubelet[2926]: W0124 00:55:15.743699 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.745010 kubelet[2926]: E0124 00:55:15.744729 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.759862 kubelet[2926]: E0124 00:55:15.759723 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.759862 kubelet[2926]: W0124 00:55:15.759836 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.761463 kubelet[2926]: E0124 00:55:15.760629 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.761876 kubelet[2926]: E0124 00:55:15.761727 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.761876 kubelet[2926]: W0124 00:55:15.761842 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.762221 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.763030 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.765667 kubelet[2926]: W0124 00:55:15.763517 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.763538 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.764166 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.765667 kubelet[2926]: W0124 00:55:15.764182 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.764198 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.764888 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.765667 kubelet[2926]: W0124 00:55:15.764903 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.765667 kubelet[2926]: E0124 00:55:15.764920 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.766092 kubelet[2926]: I0124 00:55:15.765934 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1351988d-2da1-448e-bfda-fb7490691684-socket-dir\") pod \"csi-node-driver-htrd2\" (UID: \"1351988d-2da1-448e-bfda-fb7490691684\") " pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:15.766124 kubelet[2926]: E0124 00:55:15.766099 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.766124 kubelet[2926]: W0124 00:55:15.766114 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.766168 kubelet[2926]: E0124 00:55:15.766129 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.767857 kubelet[2926]: E0124 00:55:15.767705 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.767857 kubelet[2926]: W0124 00:55:15.767729 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.767857 kubelet[2926]: E0124 00:55:15.767746 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.813041 containerd[1614]: time="2026-01-24T00:55:15.809005911Z" level=info msg="connecting to shim a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4" address="unix:///run/containerd/s/335d714dfd8d79ad2ce46d72ca6943036b1771048c247c84067259bdd6a46dc0" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:55:15.877569 kubelet[2926]: E0124 00:55:15.876185 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.877569 kubelet[2926]: W0124 00:55:15.876219 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.877569 kubelet[2926]: E0124 00:55:15.876689 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.893795 kubelet[2926]: E0124 00:55:15.890217 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.895135 kubelet[2926]: W0124 00:55:15.890240 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.895574 kubelet[2926]: E0124 00:55:15.895548 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.899748 kubelet[2926]: E0124 00:55:15.897229 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.902224 kubelet[2926]: W0124 00:55:15.902197 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.904503 kubelet[2926]: E0124 00:55:15.904369 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.904583 kubelet[2926]: I0124 00:55:15.904525 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zq6\" (UniqueName: \"kubernetes.io/projected/1351988d-2da1-448e-bfda-fb7490691684-kube-api-access-l5zq6\") pod \"csi-node-driver-htrd2\" (UID: \"1351988d-2da1-448e-bfda-fb7490691684\") " pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:15.907583 kubelet[2926]: E0124 00:55:15.907564 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.911896 kubelet[2926]: W0124 00:55:15.911873 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.913377 kubelet[2926]: E0124 00:55:15.913358 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.919629 kubelet[2926]: E0124 00:55:15.919225 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.919839 kubelet[2926]: W0124 00:55:15.919714 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.921693 kubelet[2926]: E0124 00:55:15.920163 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.923642 kubelet[2926]: E0124 00:55:15.923622 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.926050 kubelet[2926]: W0124 00:55:15.925831 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.927629 kubelet[2926]: E0124 00:55:15.926704 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.938667 kubelet[2926]: E0124 00:55:15.933794 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.938667 kubelet[2926]: W0124 00:55:15.933910 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.938667 kubelet[2926]: E0124 00:55:15.936623 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.938667 kubelet[2926]: E0124 00:55:15.937703 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.938667 kubelet[2926]: W0124 00:55:15.937725 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.938667 kubelet[2926]: E0124 00:55:15.937808 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.944706 kubelet[2926]: E0124 00:55:15.940048 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.944706 kubelet[2926]: W0124 00:55:15.940149 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.944706 kubelet[2926]: E0124 00:55:15.940998 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.950995 kubelet[2926]: E0124 00:55:15.946694 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.950995 kubelet[2926]: W0124 00:55:15.946712 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.952724 kubelet[2926]: E0124 00:55:15.952616 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.956683 kubelet[2926]: E0124 00:55:15.954001 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.956683 kubelet[2926]: W0124 00:55:15.954021 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.956683 kubelet[2926]: E0124 00:55:15.954097 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.956683 kubelet[2926]: E0124 00:55:15.954982 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.956683 kubelet[2926]: W0124 00:55:15.954995 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.959499 kubelet[2926]: E0124 00:55:15.958102 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.959499 kubelet[2926]: E0124 00:55:15.958921 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.959499 kubelet[2926]: W0124 00:55:15.958935 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.959893 kubelet[2926]: E0124 00:55:15.959498 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.962081 kubelet[2926]: E0124 00:55:15.960499 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.962081 kubelet[2926]: W0124 00:55:15.960520 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.962081 kubelet[2926]: E0124 00:55:15.960594 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.963985 kubelet[2926]: E0124 00:55:15.963614 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.963985 kubelet[2926]: W0124 00:55:15.963712 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.964830 kubelet[2926]: E0124 00:55:15.964633 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.965841 kubelet[2926]: E0124 00:55:15.965740 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.965915 kubelet[2926]: W0124 00:55:15.965842 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.982203 kubelet[2926]: E0124 00:55:15.980803 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.982203 kubelet[2926]: E0124 00:55:15.980880 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.982203 kubelet[2926]: W0124 00:55:15.980892 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.982000 audit[3504]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:15.986642 kubelet[2926]: E0124 00:55:15.981240 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.986845 kubelet[2926]: E0124 00:55:15.986715 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.986845 kubelet[2926]: W0124 00:55:15.986731 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.982000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd318c6ed0 a2=0 a3=7ffd318c6ebc items=0 ppid=3030 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:15.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.987988 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.988692 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.991768 kubelet[2926]: W0124 00:55:15.988706 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.989502 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.990058 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.991768 kubelet[2926]: W0124 00:55:15.990072 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.990088 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.991768 kubelet[2926]: E0124 00:55:15.991148 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.991768 kubelet[2926]: W0124 00:55:15.991160 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.996231 kubelet[2926]: E0124 00:55:15.992770 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:15.996231 kubelet[2926]: E0124 00:55:15.992893 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.996231 kubelet[2926]: W0124 00:55:15.992904 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.996231 kubelet[2926]: E0124 00:55:15.992917 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:15.996231 kubelet[2926]: E0124 00:55:15.995120 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:15.996231 kubelet[2926]: W0124 00:55:15.995132 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:15.996231 kubelet[2926]: E0124 00:55:15.995145 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:16.003000 audit: BPF prog-id=149 op=LOAD Jan 24 00:55:16.004000 audit: BPF prog-id=150 op=LOAD Jan 24 00:55:16.004000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.005000 audit: BPF prog-id=150 op=UNLOAD Jan 24 00:55:16.005000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.005000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.006000 audit: BPF prog-id=151 op=LOAD Jan 24 00:55:16.006000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.006000 audit: BPF prog-id=152 op=LOAD Jan 24 00:55:16.006000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.007000 audit: BPF prog-id=152 op=UNLOAD Jan 24 00:55:16.007000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:55:16.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.007000 audit: BPF prog-id=151 op=UNLOAD Jan 24 00:55:16.007000 audit[3398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.007000 audit: BPF prog-id=153 op=LOAD Jan 24 00:55:16.007000 audit[3398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3366 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.012000 audit[3504]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:16.012000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd318c6ed0 a2=0 a3=0 items=0 ppid=3030 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.012000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:16.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362383366303632636434613137626432336330313339313530626462 Jan 24 00:55:16.070937 kubelet[2926]: E0124 00:55:16.068527 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.070937 kubelet[2926]: W0124 00:55:16.068563 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.070937 kubelet[2926]: E0124 00:55:16.068590 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:16.070937 kubelet[2926]: E0124 00:55:16.069776 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.070937 kubelet[2926]: W0124 00:55:16.069792 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.070937 kubelet[2926]: E0124 00:55:16.069812 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:16.081077 kubelet[2926]: E0124 00:55:16.081022 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.081077 kubelet[2926]: W0124 00:55:16.081046 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.081077 kubelet[2926]: E0124 00:55:16.081069 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:16.085869 kubelet[2926]: E0124 00:55:16.083983 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.085869 kubelet[2926]: W0124 00:55:16.084092 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.085869 kubelet[2926]: E0124 00:55:16.084119 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:16.085869 kubelet[2926]: E0124 00:55:16.085181 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.085869 kubelet[2926]: W0124 00:55:16.085197 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.085869 kubelet[2926]: E0124 00:55:16.085216 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:16.116896 systemd[1]: Started cri-containerd-a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4.scope - libcontainer container a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4. Jan 24 00:55:16.168036 kubelet[2926]: E0124 00:55:16.165000 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:16.168036 kubelet[2926]: W0124 00:55:16.165119 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:16.168036 kubelet[2926]: E0124 00:55:16.165149 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:16.220000 audit: BPF prog-id=154 op=LOAD Jan 24 00:55:16.221000 audit: BPF prog-id=155 op=LOAD Jan 24 00:55:16.221000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001bc238 a2=98 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=155 op=UNLOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=156 op=LOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001bc488 a2=98 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.222000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=157 op=LOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001bc218 a2=98 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=157 op=UNLOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=156 op=UNLOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:55:16.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.222000 audit: BPF prog-id=158 op=LOAD Jan 24 00:55:16.222000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001bc6e8 a2=98 a3=0 items=0 ppid=3473 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:16.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363163303135373837333934383131613134616363353433346536 Jan 24 00:55:16.340702 containerd[1614]: time="2026-01-24T00:55:16.340229281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8bfc665b-tb4px,Uid:c68d4d18-aae8-46b6-971b-e0ce5b3573ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4\"" Jan 24 00:55:16.345993 kubelet[2926]: E0124 00:55:16.343855 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:16.349748 containerd[1614]: time="2026-01-24T00:55:16.347187021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:55:16.389810 containerd[1614]: time="2026-01-24T00:55:16.389668042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6cvqh,Uid:23ef921d-4c70-49e8-a595-d8305d10d0a4,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\"" Jan 24 00:55:16.394196 kubelet[2926]: E0124 00:55:16.393474 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:16.810984 kubelet[2926]: E0124 00:55:16.810169 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:17.613699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358097951.mount: Deactivated successfully. Jan 24 00:55:18.813937 kubelet[2926]: E0124 00:55:18.811940 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:20.807941 kubelet[2926]: E0124 00:55:20.806226 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:22.806003 kubelet[2926]: E0124 00:55:22.805840 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:24.806729 kubelet[2926]: 
E0124 00:55:24.806599 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:24.936803 containerd[1614]: time="2026-01-24T00:55:24.936454326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:24.989047 containerd[1614]: time="2026-01-24T00:55:24.944831245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 00:55:24.989047 containerd[1614]: time="2026-01-24T00:55:24.978872293Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:24.993663 containerd[1614]: time="2026-01-24T00:55:24.993450885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:24.997760 containerd[1614]: time="2026-01-24T00:55:24.996440140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 8.649212443s" Jan 24 00:55:24.997760 containerd[1614]: time="2026-01-24T00:55:24.996566536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 
00:55:25.006600 containerd[1614]: time="2026-01-24T00:55:25.006561923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:55:25.181627 containerd[1614]: time="2026-01-24T00:55:25.181208544Z" level=info msg="CreateContainer within sandbox \"cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:55:25.262509 containerd[1614]: time="2026-01-24T00:55:25.262067973Z" level=info msg="Container 234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:25.315703 containerd[1614]: time="2026-01-24T00:55:25.315635065Z" level=info msg="CreateContainer within sandbox \"cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656\"" Jan 24 00:55:25.356614 containerd[1614]: time="2026-01-24T00:55:25.355886872Z" level=info msg="StartContainer for \"234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656\"" Jan 24 00:55:25.361698 containerd[1614]: time="2026-01-24T00:55:25.361489669Z" level=info msg="connecting to shim 234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656" address="unix:///run/containerd/s/a927722b3d69e2ccd80eb8f823df61cfe0472c38db31feaacbdd14688b934e85" protocol=ttrpc version=3 Jan 24 00:55:25.534041 systemd[1]: Started cri-containerd-234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656.scope - libcontainer container 234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656. 
Jan 24 00:55:25.699544 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 24 00:55:25.699701 kernel: audit: type=1334 audit(1769216125.681:554): prog-id=159 op=LOAD Jan 24 00:55:25.681000 audit: BPF prog-id=159 op=LOAD Jan 24 00:55:25.699000 audit: BPF prog-id=160 op=LOAD Jan 24 00:55:25.716874 kernel: audit: type=1334 audit(1769216125.699:555): prog-id=160 op=LOAD Jan 24 00:55:25.699000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.769511 kernel: audit: type=1300 audit(1769216125.699:555): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.855425 kernel: audit: type=1327 audit(1769216125.699:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.869677 kernel: audit: type=1334 audit(1769216125.699:556): prog-id=160 op=UNLOAD Jan 24 00:55:25.869753 kernel: audit: type=1300 audit(1769216125.699:556): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.699000 audit: BPF prog-id=160 op=UNLOAD Jan 24 00:55:25.699000 audit[3562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.937826 kernel: audit: type=1327 audit(1769216125.699:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.938134 kernel: audit: type=1334 audit(1769216125.708:557): prog-id=161 op=LOAD Jan 24 00:55:25.708000 audit: BPF prog-id=161 op=LOAD Jan 24 00:55:25.708000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.998468 kernel: audit: type=1300 audit(1769216125.708:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:55:25.999982 kernel: audit: type=1327 audit(1769216125.708:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.708000 audit: BPF prog-id=162 op=LOAD Jan 24 00:55:25.708000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.708000 audit: BPF prog-id=162 op=UNLOAD Jan 24 00:55:25.708000 audit[3562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 
Jan 24 00:55:25.708000 audit: BPF prog-id=161 op=UNLOAD Jan 24 00:55:25.708000 audit[3562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:25.708000 audit: BPF prog-id=163 op=LOAD Jan 24 00:55:25.708000 audit[3562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3366 pid=3562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:25.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233346235383861343137393134623461353361313035363534363739 Jan 24 00:55:26.189177 containerd[1614]: time="2026-01-24T00:55:26.189101730Z" level=info msg="StartContainer for \"234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656\" returns successfully" Jan 24 00:55:26.422207 kubelet[2926]: E0124 00:55:26.422076 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:26.501139 kubelet[2926]: E0124 00:55:26.499751 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 
00:55:26.501139 kubelet[2926]: W0124 00:55:26.499787 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.526665 kubelet[2926]: E0124 00:55:26.519222 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.547207 kubelet[2926]: E0124 00:55:26.546896 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.547207 kubelet[2926]: W0124 00:55:26.547027 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.547207 kubelet[2926]: E0124 00:55:26.547055 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.549064 kubelet[2926]: E0124 00:55:26.549043 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.550730 kubelet[2926]: W0124 00:55:26.549734 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.550730 kubelet[2926]: E0124 00:55:26.549768 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.551483 kubelet[2926]: E0124 00:55:26.551462 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.551592 kubelet[2926]: W0124 00:55:26.551570 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.551686 kubelet[2926]: E0124 00:55:26.551667 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.558633 kubelet[2926]: E0124 00:55:26.558095 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.558633 kubelet[2926]: W0124 00:55:26.558129 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.558633 kubelet[2926]: E0124 00:55:26.558154 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.559181 kubelet[2926]: E0124 00:55:26.558849 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.559181 kubelet[2926]: W0124 00:55:26.558866 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.559181 kubelet[2926]: E0124 00:55:26.558885 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.563004 kubelet[2926]: E0124 00:55:26.562881 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.563114 kubelet[2926]: W0124 00:55:26.563095 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.563207 kubelet[2926]: E0124 00:55:26.563188 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.565214 kubelet[2926]: E0124 00:55:26.565195 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.566871 kubelet[2926]: W0124 00:55:26.566128 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.566871 kubelet[2926]: E0124 00:55:26.566154 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.568597 kubelet[2926]: E0124 00:55:26.568581 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.568698 kubelet[2926]: W0124 00:55:26.568685 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.569508 kubelet[2926]: E0124 00:55:26.569118 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.571495 kubelet[2926]: E0124 00:55:26.571478 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.571577 kubelet[2926]: W0124 00:55:26.571564 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.571636 kubelet[2926]: E0124 00:55:26.571625 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.572873 kubelet[2926]: E0124 00:55:26.572858 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.573539 kubelet[2926]: W0124 00:55:26.573146 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.573539 kubelet[2926]: E0124 00:55:26.573167 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.574447 kubelet[2926]: E0124 00:55:26.574424 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.574839 kubelet[2926]: W0124 00:55:26.574821 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.574996 kubelet[2926]: E0124 00:55:26.574897 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.601528 kubelet[2926]: E0124 00:55:26.600176 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.601528 kubelet[2926]: W0124 00:55:26.600523 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.601528 kubelet[2926]: E0124 00:55:26.600559 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.607743 kubelet[2926]: E0124 00:55:26.607528 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.625661 kubelet[2926]: W0124 00:55:26.623160 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.639498 kubelet[2926]: E0124 00:55:26.635430 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.646520 kubelet[2926]: E0124 00:55:26.645733 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.652674 kubelet[2926]: W0124 00:55:26.652152 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.655070 kubelet[2926]: E0124 00:55:26.655043 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.658784 kubelet[2926]: E0124 00:55:26.658761 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.659645 kubelet[2926]: W0124 00:55:26.659025 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.659645 kubelet[2926]: E0124 00:55:26.659062 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.663491 kubelet[2926]: E0124 00:55:26.662162 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.665523 kubelet[2926]: W0124 00:55:26.664425 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.667716 kubelet[2926]: E0124 00:55:26.667356 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.672099 kubelet[2926]: I0124 00:55:26.665107 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c8bfc665b-tb4px" podStartSLOduration=4.003716812 podStartE2EDuration="12.665088088s" podCreationTimestamp="2026-01-24 00:55:14 +0000 UTC" firstStartedPulling="2026-01-24 00:55:16.345042018 +0000 UTC m=+57.924271752" lastFinishedPulling="2026-01-24 00:55:25.006413275 +0000 UTC m=+66.585643028" observedRunningTime="2026-01-24 00:55:26.660497493 +0000 UTC m=+68.239727236" watchObservedRunningTime="2026-01-24 00:55:26.665088088 +0000 UTC m=+68.244317821" Jan 24 00:55:26.672985 kubelet[2926]: E0124 00:55:26.672859 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.674758 kubelet[2926]: W0124 00:55:26.673065 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.674758 kubelet[2926]: E0124 00:55:26.673091 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.677782 kubelet[2926]: E0124 00:55:26.677228 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.677994 kubelet[2926]: W0124 00:55:26.677871 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.678108 kubelet[2926]: E0124 00:55:26.678088 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.679780 kubelet[2926]: E0124 00:55:26.679762 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.679879 kubelet[2926]: W0124 00:55:26.679863 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.680087 kubelet[2926]: E0124 00:55:26.680069 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.681432 kubelet[2926]: E0124 00:55:26.681217 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.681432 kubelet[2926]: W0124 00:55:26.681235 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.682111 kubelet[2926]: E0124 00:55:26.682087 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.685208 kubelet[2926]: E0124 00:55:26.685184 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.686083 kubelet[2926]: W0124 00:55:26.685695 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.686083 kubelet[2926]: E0124 00:55:26.685720 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.702192 kubelet[2926]: E0124 00:55:26.701226 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.702192 kubelet[2926]: W0124 00:55:26.701487 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.702192 kubelet[2926]: E0124 00:55:26.701521 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.706876 kubelet[2926]: E0124 00:55:26.706846 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.712749 kubelet[2926]: W0124 00:55:26.712499 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.718727 kubelet[2926]: E0124 00:55:26.713395 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.725146 kubelet[2926]: E0124 00:55:26.719133 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.725146 kubelet[2926]: W0124 00:55:26.719152 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.725146 kubelet[2926]: E0124 00:55:26.720210 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.731497 kubelet[2926]: E0124 00:55:26.731436 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.731497 kubelet[2926]: W0124 00:55:26.731465 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.735604 kubelet[2926]: E0124 00:55:26.735538 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.763620 kubelet[2926]: E0124 00:55:26.761226 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.763620 kubelet[2926]: W0124 00:55:26.761431 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.763620 kubelet[2926]: E0124 00:55:26.761469 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.774196 kubelet[2926]: E0124 00:55:26.770539 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.774196 kubelet[2926]: W0124 00:55:26.770650 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.774196 kubelet[2926]: E0124 00:55:26.770977 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.783208 kubelet[2926]: E0124 00:55:26.782161 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.783208 kubelet[2926]: W0124 00:55:26.782574 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.783208 kubelet[2926]: E0124 00:55:26.782614 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.784805 kubelet[2926]: E0124 00:55:26.784106 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.784805 kubelet[2926]: W0124 00:55:26.784208 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.784805 kubelet[2926]: E0124 00:55:26.784500 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.798685 kubelet[2926]: E0124 00:55:26.791518 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.798685 kubelet[2926]: W0124 00:55:26.791628 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.798685 kubelet[2926]: E0124 00:55:26.791659 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.798685 kubelet[2926]: E0124 00:55:26.799650 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.798685 kubelet[2926]: W0124 00:55:26.799666 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.798685 kubelet[2926]: E0124 00:55:26.799685 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:26.814715 kubelet[2926]: E0124 00:55:26.803647 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:26.814715 kubelet[2926]: W0124 00:55:26.803661 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:26.814715 kubelet[2926]: E0124 00:55:26.803681 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:26.814715 kubelet[2926]: E0124 00:55:26.805480 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:27.019231 containerd[1614]: time="2026-01-24T00:55:27.018836286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:27.028609 containerd[1614]: time="2026-01-24T00:55:27.028543697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 00:55:27.040229 containerd[1614]: time="2026-01-24T00:55:27.039590689Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:27.059172 containerd[1614]: time="2026-01-24T00:55:27.058804318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:27.062644 containerd[1614]: time="2026-01-24T00:55:27.062048823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.055301493s" Jan 24 00:55:27.062644 containerd[1614]: time="2026-01-24T00:55:27.062180250Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:55:27.114943 containerd[1614]: time="2026-01-24T00:55:27.114772709Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:55:27.221770 containerd[1614]: time="2026-01-24T00:55:27.214778169Z" level=info msg="Container 96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:27.350667 containerd[1614]: time="2026-01-24T00:55:27.349777282Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef\"" Jan 24 00:55:27.358460 containerd[1614]: time="2026-01-24T00:55:27.356642423Z" level=info msg="StartContainer for \"96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef\"" Jan 24 00:55:27.406981 containerd[1614]: time="2026-01-24T00:55:27.406894679Z" level=info msg="connecting to shim 96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef" address="unix:///run/containerd/s/335d714dfd8d79ad2ce46d72ca6943036b1771048c247c84067259bdd6a46dc0" protocol=ttrpc version=3 Jan 24 00:55:27.465497 kubelet[2926]: E0124 00:55:27.464210 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:27.528918 kubelet[2926]: E0124 00:55:27.528868 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.531073 kubelet[2926]: W0124 00:55:27.530507 2926 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.531073 kubelet[2926]: E0124 00:55:27.530713 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.532643 kubelet[2926]: E0124 00:55:27.532471 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.532643 kubelet[2926]: W0124 00:55:27.532493 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.532643 kubelet[2926]: E0124 00:55:27.532518 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.552126 kubelet[2926]: E0124 00:55:27.550734 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.552126 kubelet[2926]: W0124 00:55:27.550875 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.552126 kubelet[2926]: E0124 00:55:27.550912 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.553534 kubelet[2926]: E0124 00:55:27.553508 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.553534 kubelet[2926]: W0124 00:55:27.553521 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.553626 kubelet[2926]: E0124 00:55:27.553540 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.555992 kubelet[2926]: E0124 00:55:27.554966 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.555992 kubelet[2926]: W0124 00:55:27.554983 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.555992 kubelet[2926]: E0124 00:55:27.554999 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.558962 kubelet[2926]: E0124 00:55:27.558944 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.562636 kubelet[2926]: W0124 00:55:27.559910 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.562636 kubelet[2926]: E0124 00:55:27.559934 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.577015 kubelet[2926]: E0124 00:55:27.576982 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.577167 kubelet[2926]: W0124 00:55:27.577146 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.577868 kubelet[2926]: E0124 00:55:27.577743 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.585871 kubelet[2926]: E0124 00:55:27.585517 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.585871 kubelet[2926]: W0124 00:55:27.585543 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.585871 kubelet[2926]: E0124 00:55:27.585569 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.587630 kubelet[2926]: E0124 00:55:27.587605 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.589226 kubelet[2926]: W0124 00:55:27.589203 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.589973 kubelet[2926]: E0124 00:55:27.589592 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.592458 kubelet[2926]: E0124 00:55:27.592433 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.592638 kubelet[2926]: W0124 00:55:27.592618 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.592748 kubelet[2926]: E0124 00:55:27.592728 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.593456 kubelet[2926]: E0124 00:55:27.593439 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.593548 kubelet[2926]: W0124 00:55:27.593533 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.593615 kubelet[2926]: E0124 00:55:27.593601 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.598871 kubelet[2926]: E0124 00:55:27.598596 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.598871 kubelet[2926]: W0124 00:55:27.598619 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.598871 kubelet[2926]: E0124 00:55:27.598636 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.613236 kubelet[2926]: E0124 00:55:27.605222 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.613236 kubelet[2926]: W0124 00:55:27.605511 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.613236 kubelet[2926]: E0124 00:55:27.605537 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.620237 kubelet[2926]: E0124 00:55:27.619736 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.620237 kubelet[2926]: W0124 00:55:27.619923 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.620237 kubelet[2926]: E0124 00:55:27.619952 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.624089 kubelet[2926]: E0124 00:55:27.624067 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.624184 kubelet[2926]: W0124 00:55:27.624166 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.624460 kubelet[2926]: E0124 00:55:27.624442 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.628437 kubelet[2926]: E0124 00:55:27.628419 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.628870 kubelet[2926]: W0124 00:55:27.628851 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.629209 kubelet[2926]: E0124 00:55:27.629192 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.634207 kubelet[2926]: E0124 00:55:27.633899 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.636068 kubelet[2926]: W0124 00:55:27.635713 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.636185 kubelet[2926]: E0124 00:55:27.636165 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.637507 kubelet[2926]: E0124 00:55:27.637403 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.637507 kubelet[2926]: W0124 00:55:27.637425 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.655852 kubelet[2926]: E0124 00:55:27.655525 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.660402 kubelet[2926]: E0124 00:55:27.659485 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.660402 kubelet[2926]: W0124 00:55:27.659516 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.660402 kubelet[2926]: E0124 00:55:27.660160 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.670663 kubelet[2926]: E0124 00:55:27.670471 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.670663 kubelet[2926]: W0124 00:55:27.670602 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.670663 kubelet[2926]: E0124 00:55:27.670650 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.675116 kubelet[2926]: E0124 00:55:27.674637 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.675116 kubelet[2926]: W0124 00:55:27.674998 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.678421 kubelet[2926]: E0124 00:55:27.677677 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.685145 kubelet[2926]: E0124 00:55:27.684716 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.685145 kubelet[2926]: W0124 00:55:27.684855 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.689927 kubelet[2926]: E0124 00:55:27.689192 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.698224 kubelet[2926]: E0124 00:55:27.694927 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.698224 kubelet[2926]: W0124 00:55:27.694945 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.698224 kubelet[2926]: E0124 00:55:27.695067 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.706189 kubelet[2926]: E0124 00:55:27.701665 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.706189 kubelet[2926]: W0124 00:55:27.701849 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.706189 kubelet[2926]: E0124 00:55:27.701964 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.708468 kubelet[2926]: E0124 00:55:27.707166 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.708468 kubelet[2926]: W0124 00:55:27.707509 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.711039 kubelet[2926]: E0124 00:55:27.710466 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.711082 systemd[1]: Started cri-containerd-96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef.scope - libcontainer container 96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef. 
Jan 24 00:55:27.716475 kubelet[2926]: E0124 00:55:27.713963 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.716475 kubelet[2926]: W0124 00:55:27.714061 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.717999 kubelet[2926]: E0124 00:55:27.717497 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.725235 kubelet[2926]: E0124 00:55:27.725064 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.734109 kubelet[2926]: W0124 00:55:27.731174 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.735659 kubelet[2926]: E0124 00:55:27.734638 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.737456 kubelet[2926]: E0124 00:55:27.736895 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.739166 kubelet[2926]: W0124 00:55:27.739141 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.743162 kubelet[2926]: E0124 00:55:27.739991 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.758593 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.768044 kubelet[2926]: W0124 00:55:27.758623 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.761079 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.764854 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.768044 kubelet[2926]: W0124 00:55:27.764884 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.766010 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.768044 kubelet[2926]: W0124 00:55:27.766029 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.766051 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.768044 kubelet[2926]: E0124 00:55:27.767469 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.772111 kubelet[2926]: E0124 00:55:27.768203 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.772111 kubelet[2926]: W0124 00:55:27.768217 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.772111 kubelet[2926]: E0124 00:55:27.768242 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:55:27.772111 kubelet[2926]: E0124 00:55:27.771432 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:55:27.772111 kubelet[2926]: W0124 00:55:27.771445 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:55:27.772111 kubelet[2926]: E0124 00:55:27.771460 2926 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:55:27.807000 audit[3689]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=3689 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:27.807000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffec5ea35e0 a2=0 a3=7ffec5ea35cc items=0 ppid=3030 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:27.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:27.820000 audit[3689]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:55:27.820000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffec5ea35e0 a2=0 a3=7ffec5ea35cc items=0 ppid=3030 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:27.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:55:28.022000 audit: BPF prog-id=164 op=LOAD Jan 24 00:55:28.022000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3473 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:28.022000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936666133613365383238306436323034333535333530393531653961 Jan 24 00:55:28.022000 audit: BPF prog-id=165 op=LOAD Jan 24 00:55:28.022000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3473 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:28.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936666133613365383238306436323034333535333530393531653961 Jan 24 00:55:28.022000 audit: BPF prog-id=165 op=UNLOAD Jan 24 00:55:28.022000 audit[3638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:28.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936666133613365383238306436323034333535333530393531653961 Jan 24 00:55:28.022000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:55:28.022000 audit[3638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:55:28.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936666133613365383238306436323034333535333530393531653961 Jan 24 00:55:28.022000 audit: BPF prog-id=166 op=LOAD Jan 24 00:55:28.022000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3473 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:28.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936666133613365383238306436323034333535333530393531653961 Jan 24 00:55:28.276881 containerd[1614]: time="2026-01-24T00:55:28.275551245Z" level=info msg="StartContainer for \"96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef\" returns successfully" Jan 24 00:55:28.355412 systemd[1]: cri-containerd-96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef.scope: Deactivated successfully. 
Jan 24 00:55:28.364000 audit: BPF prog-id=166 op=UNLOAD Jan 24 00:55:28.400559 containerd[1614]: time="2026-01-24T00:55:28.400504370Z" level=info msg="received container exit event container_id:\"96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef\" id:\"96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef\" pid:3686 exited_at:{seconds:1769216128 nanos:389855995}" Jan 24 00:55:28.515905 kubelet[2926]: E0124 00:55:28.515867 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:28.521009 kubelet[2926]: E0124 00:55:28.520951 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:28.670238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef-rootfs.mount: Deactivated successfully. 
Jan 24 00:55:28.811893 kubelet[2926]: E0124 00:55:28.811079 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:29.595047 kubelet[2926]: E0124 00:55:29.595005 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:29.601379 kubelet[2926]: E0124 00:55:29.595674 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:29.630763 containerd[1614]: time="2026-01-24T00:55:29.630629662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:55:30.808036 kubelet[2926]: E0124 00:55:30.807796 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:32.805758 kubelet[2926]: E0124 00:55:32.805039 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:34.817538 kubelet[2926]: E0124 00:55:34.808151 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:36.806706 kubelet[2926]: E0124 00:55:36.806008 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:38.809623 kubelet[2926]: E0124 00:55:38.809585 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:38.810658 kubelet[2926]: E0124 00:55:38.810557 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:40.819698 kubelet[2926]: E0124 00:55:40.819614 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:42.809695 kubelet[2926]: E0124 00:55:42.805624 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:44.717017 containerd[1614]: time="2026-01-24T00:55:44.715115496Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:44.728021 containerd[1614]: time="2026-01-24T00:55:44.724110157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 00:55:44.738039 containerd[1614]: time="2026-01-24T00:55:44.737994249Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:44.762972 containerd[1614]: time="2026-01-24T00:55:44.762897720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:44.765031 containerd[1614]: time="2026-01-24T00:55:44.764145099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 15.133337621s" Jan 24 00:55:44.765031 containerd[1614]: time="2026-01-24T00:55:44.764182165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:55:44.800194 containerd[1614]: time="2026-01-24T00:55:44.798057402Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:55:44.807238 kubelet[2926]: E0124 00:55:44.805857 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:44.876749 containerd[1614]: time="2026-01-24T00:55:44.876208320Z" level=info msg="Container 8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:55:44.881235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825567473.mount: Deactivated successfully. Jan 24 00:55:44.920968 containerd[1614]: time="2026-01-24T00:55:44.920740441Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24\"" Jan 24 00:55:44.928450 containerd[1614]: time="2026-01-24T00:55:44.928106750Z" level=info msg="StartContainer for \"8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24\"" Jan 24 00:55:44.940844 containerd[1614]: time="2026-01-24T00:55:44.940137085Z" level=info msg="connecting to shim 8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24" address="unix:///run/containerd/s/335d714dfd8d79ad2ce46d72ca6943036b1771048c247c84067259bdd6a46dc0" protocol=ttrpc version=3 Jan 24 00:55:45.164065 systemd[1]: Started cri-containerd-8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24.scope - libcontainer container 8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24. 
Jan 24 00:55:45.587000 audit: BPF prog-id=167 op=LOAD Jan 24 00:55:45.601859 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 24 00:55:45.602002 kernel: audit: type=1334 audit(1769216145.587:570): prog-id=167 op=LOAD Jan 24 00:55:45.587000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.669960 kernel: audit: type=1300 audit(1769216145.587:570): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.670098 kernel: audit: type=1327 audit(1769216145.587:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: BPF prog-id=168 op=LOAD Jan 24 00:55:45.717648 kernel: audit: type=1334 audit(1769216145.587:571): prog-id=168 op=LOAD Jan 24 00:55:45.587000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.781903 kernel: audit: type=1300 audit(1769216145.587:571): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.782058 kernel: audit: type=1327 audit(1769216145.587:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: BPF prog-id=168 op=UNLOAD Jan 24 00:55:45.587000 audit[3740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.883057 kernel: audit: type=1334 audit(1769216145.587:572): prog-id=168 op=UNLOAD Jan 24 00:55:45.883478 kernel: audit: type=1300 audit(1769216145.587:572): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.883521 kernel: audit: type=1327 audit(1769216145.587:572): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: BPF prog-id=167 op=UNLOAD Jan 24 00:55:45.979815 kernel: audit: type=1334 audit(1769216145.587:573): prog-id=167 op=UNLOAD Jan 24 00:55:45.587000 audit[3740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:45.587000 audit: BPF prog-id=169 op=LOAD Jan 24 00:55:45.587000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3473 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:55:45.587000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866656136633933623462666262623133316666383935626266376265 Jan 24 00:55:46.233643 containerd[1614]: time="2026-01-24T00:55:46.233479535Z" level=info msg="StartContainer for \"8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24\" returns successfully" Jan 24 00:55:46.807545 kubelet[2926]: E0124 00:55:46.806962 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:47.185763 kubelet[2926]: E0124 00:55:47.180895 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:47.820498 kubelet[2926]: E0124 00:55:47.820195 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:48.162031 kubelet[2926]: E0124 00:55:48.157061 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:48.808758 kubelet[2926]: E0124 00:55:48.806723 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:50.605942 systemd[1]: 
cri-containerd-8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24.scope: Deactivated successfully. Jan 24 00:55:50.608587 systemd[1]: cri-containerd-8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24.scope: Consumed 2.632s CPU time, 183.5M memory peak, 3.5M read from disk, 171.3M written to disk. Jan 24 00:55:50.612000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:55:50.621589 containerd[1614]: time="2026-01-24T00:55:50.620972997Z" level=info msg="received container exit event container_id:\"8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24\" id:\"8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24\" pid:3753 exited_at:{seconds:1769216150 nanos:606821188}" Jan 24 00:55:50.634715 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 24 00:55:50.634867 kernel: audit: type=1334 audit(1769216150.612:575): prog-id=169 op=UNLOAD Jan 24 00:55:50.853833 kubelet[2926]: E0124 00:55:50.851772 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:50.914724 kubelet[2926]: I0124 00:55:50.906437 2926 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:55:51.054590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24-rootfs.mount: Deactivated successfully. Jan 24 00:55:51.229469 systemd[1]: Created slice kubepods-besteffort-podda81df37_b61c_4bd8_af14_44830145232f.slice - libcontainer container kubepods-besteffort-podda81df37_b61c_4bd8_af14_44830145232f.slice. 
Jan 24 00:55:51.269237 kubelet[2926]: I0124 00:55:51.269093 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da81df37-b61c-4bd8-af14-44830145232f-whisker-backend-key-pair\") pod \"whisker-658695dc4c-85mkw\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:55:51.273440 kubelet[2926]: I0124 00:55:51.272490 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da81df37-b61c-4bd8-af14-44830145232f-whisker-ca-bundle\") pod \"whisker-658695dc4c-85mkw\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:55:51.273440 kubelet[2926]: I0124 00:55:51.272722 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk75z\" (UniqueName: \"kubernetes.io/projected/da81df37-b61c-4bd8-af14-44830145232f-kube-api-access-rk75z\") pod \"whisker-658695dc4c-85mkw\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:55:51.273440 kubelet[2926]: I0124 00:55:51.272765 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4h5\" (UniqueName: \"kubernetes.io/projected/12fcbf43-6c31-4160-9172-b8eee7f25a4a-kube-api-access-tf4h5\") pod \"calico-apiserver-6b956dc89b-tb4xl\" (UID: \"12fcbf43-6c31-4160-9172-b8eee7f25a4a\") " pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:55:51.273440 kubelet[2926]: I0124 00:55:51.272846 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12fcbf43-6c31-4160-9172-b8eee7f25a4a-calico-apiserver-certs\") pod 
\"calico-apiserver-6b956dc89b-tb4xl\" (UID: \"12fcbf43-6c31-4160-9172-b8eee7f25a4a\") " pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:55:51.317491 systemd[1]: Created slice kubepods-besteffort-pod12fcbf43_6c31_4160_9172_b8eee7f25a4a.slice - libcontainer container kubepods-besteffort-pod12fcbf43_6c31_4160_9172_b8eee7f25a4a.slice. Jan 24 00:55:51.386950 kubelet[2926]: I0124 00:55:51.375610 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3894e361-72d8-4a9b-bd6a-f0764e209428-config-volume\") pod \"coredns-668d6bf9bc-5fkkt\" (UID: \"3894e361-72d8-4a9b-bd6a-f0764e209428\") " pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:55:51.384760 systemd[1]: Created slice kubepods-burstable-pod3894e361_72d8_4a9b_bd6a_f0764e209428.slice - libcontainer container kubepods-burstable-pod3894e361_72d8_4a9b_bd6a_f0764e209428.slice. Jan 24 00:55:51.400761 kubelet[2926]: I0124 00:55:51.397183 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzxl\" (UniqueName: \"kubernetes.io/projected/c89836fa-dd95-4cb6-925a-be9fc6a96ed3-kube-api-access-brzxl\") pod \"goldmane-666569f655-5tr2c\" (UID: \"c89836fa-dd95-4cb6-925a-be9fc6a96ed3\") " pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:51.400761 kubelet[2926]: I0124 00:55:51.397411 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28e628c9-c589-459d-8af8-92dc28cf7661-config-volume\") pod \"coredns-668d6bf9bc-cb6gm\" (UID: \"28e628c9-c589-459d-8af8-92dc28cf7661\") " pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:55:51.400761 kubelet[2926]: I0124 00:55:51.397448 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrgw\" (UniqueName: 
\"kubernetes.io/projected/28e628c9-c589-459d-8af8-92dc28cf7661-kube-api-access-wkrgw\") pod \"coredns-668d6bf9bc-cb6gm\" (UID: \"28e628c9-c589-459d-8af8-92dc28cf7661\") " pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:55:51.400761 kubelet[2926]: I0124 00:55:51.397486 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f065d32-b76b-4b45-b859-d08ade23f4a2-tigera-ca-bundle\") pod \"calico-kube-controllers-7797c85599-jzs5d\" (UID: \"1f065d32-b76b-4b45-b859-d08ade23f4a2\") " pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:55:51.400761 kubelet[2926]: I0124 00:55:51.397558 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89836fa-dd95-4cb6-925a-be9fc6a96ed3-config\") pod \"goldmane-666569f655-5tr2c\" (UID: \"c89836fa-dd95-4cb6-925a-be9fc6a96ed3\") " pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:51.401167 kubelet[2926]: I0124 00:55:51.397585 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c89836fa-dd95-4cb6-925a-be9fc6a96ed3-goldmane-key-pair\") pod \"goldmane-666569f655-5tr2c\" (UID: \"c89836fa-dd95-4cb6-925a-be9fc6a96ed3\") " pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:51.401167 kubelet[2926]: I0124 00:55:51.397627 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzx6z\" (UniqueName: \"kubernetes.io/projected/bf242984-56a4-4914-9f0b-44fbe621897e-kube-api-access-qzx6z\") pod \"calico-apiserver-6b956dc89b-8vwmm\" (UID: \"bf242984-56a4-4914-9f0b-44fbe621897e\") " pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:55:51.401167 kubelet[2926]: I0124 00:55:51.397654 2926 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5cmm\" (UniqueName: \"kubernetes.io/projected/1f065d32-b76b-4b45-b859-d08ade23f4a2-kube-api-access-r5cmm\") pod \"calico-kube-controllers-7797c85599-jzs5d\" (UID: \"1f065d32-b76b-4b45-b859-d08ade23f4a2\") " pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:55:51.401167 kubelet[2926]: I0124 00:55:51.397693 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkwq\" (UniqueName: \"kubernetes.io/projected/3894e361-72d8-4a9b-bd6a-f0764e209428-kube-api-access-lxkwq\") pod \"coredns-668d6bf9bc-5fkkt\" (UID: \"3894e361-72d8-4a9b-bd6a-f0764e209428\") " pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:55:51.401167 kubelet[2926]: I0124 00:55:51.397713 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c89836fa-dd95-4cb6-925a-be9fc6a96ed3-goldmane-ca-bundle\") pod \"goldmane-666569f655-5tr2c\" (UID: \"c89836fa-dd95-4cb6-925a-be9fc6a96ed3\") " pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:51.401573 kubelet[2926]: I0124 00:55:51.397738 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf242984-56a4-4914-9f0b-44fbe621897e-calico-apiserver-certs\") pod \"calico-apiserver-6b956dc89b-8vwmm\" (UID: \"bf242984-56a4-4914-9f0b-44fbe621897e\") " pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:55:51.575078 systemd[1]: Created slice kubepods-besteffort-podc89836fa_dd95_4cb6_925a_be9fc6a96ed3.slice - libcontainer container kubepods-besteffort-podc89836fa_dd95_4cb6_925a_be9fc6a96ed3.slice. 
Jan 24 00:55:51.701857 containerd[1614]: time="2026-01-24T00:55:51.699833032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:55:51.724637 systemd[1]: Created slice kubepods-besteffort-pod1f065d32_b76b_4b45_b859_d08ade23f4a2.slice - libcontainer container kubepods-besteffort-pod1f065d32_b76b_4b45_b859_d08ade23f4a2.slice. Jan 24 00:55:51.785889 containerd[1614]: time="2026-01-24T00:55:51.785071387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:51.801597 kubelet[2926]: E0124 00:55:51.800425 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:51.825522 systemd[1]: Created slice kubepods-besteffort-podbf242984_56a4_4914_9f0b_44fbe621897e.slice - libcontainer container kubepods-besteffort-podbf242984_56a4_4914_9f0b_44fbe621897e.slice. Jan 24 00:55:51.838855 containerd[1614]: time="2026-01-24T00:55:51.837769625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,}" Jan 24 00:55:51.878523 systemd[1]: Created slice kubepods-burstable-pod28e628c9_c589_459d_8af8_92dc28cf7661.slice - libcontainer container kubepods-burstable-pod28e628c9_c589_459d_8af8_92dc28cf7661.slice. 
Jan 24 00:55:51.902804 containerd[1614]: time="2026-01-24T00:55:51.902746087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:55:51.927535 containerd[1614]: time="2026-01-24T00:55:51.926884272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:51.949098 kubelet[2926]: E0124 00:55:51.943232 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:52.070579 containerd[1614]: time="2026-01-24T00:55:52.070531390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:52.100692 containerd[1614]: time="2026-01-24T00:55:52.100554653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,}" Jan 24 00:55:52.600851 kubelet[2926]: E0124 00:55:52.600228 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:52.671082 containerd[1614]: time="2026-01-24T00:55:52.670098046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:55:52.890743 systemd[1]: Created slice kubepods-besteffort-pod1351988d_2da1_448e_bfda_fb7490691684.slice - libcontainer container kubepods-besteffort-pod1351988d_2da1_448e_bfda_fb7490691684.slice. 
Jan 24 00:55:52.909770 containerd[1614]: time="2026-01-24T00:55:52.909713288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,}" Jan 24 00:55:53.340524 containerd[1614]: time="2026-01-24T00:55:53.340465308Z" level=error msg="Failed to destroy network for sandbox \"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.350230 systemd[1]: run-netns-cni\x2de8f6ff45\x2d8218\x2d5f1f\x2d2b2f\x2d27e61c185d59.mount: Deactivated successfully. Jan 24 00:55:53.374239 containerd[1614]: time="2026-01-24T00:55:53.374170061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.378549 kubelet[2926]: E0124 00:55:53.378471 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.379193 kubelet[2926]: E0124 00:55:53.378586 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:55:53.379193 kubelet[2926]: E0124 00:55:53.378718 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:55:53.379193 kubelet[2926]: E0124 00:55:53.378785 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02121a50cf23b490b95546257427b4df17d9b4f6e7450e7233f24f8625d00db6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658695dc4c-85mkw" podUID="da81df37-b61c-4bd8-af14-44830145232f" Jan 24 00:55:53.409881 containerd[1614]: time="2026-01-24T00:55:53.407519508Z" level=error msg="Failed to destroy network for sandbox \"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 
00:55:53.416777 systemd[1]: run-netns-cni\x2ddac6fe6f\x2d1c9e\x2d164d\x2df7a7\x2d9075ec465157.mount: Deactivated successfully. Jan 24 00:55:53.454414 containerd[1614]: time="2026-01-24T00:55:53.454182564Z" level=error msg="Failed to destroy network for sandbox \"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.460219 systemd[1]: run-netns-cni\x2df114c536\x2d6d95\x2d11c4\x2dbdc5\x2dfda7a66c168b.mount: Deactivated successfully. Jan 24 00:55:53.504192 containerd[1614]: time="2026-01-24T00:55:53.472516724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.504192 containerd[1614]: time="2026-01-24T00:55:53.500210666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.505241 kubelet[2926]: E0124 00:55:53.503890 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.505241 kubelet[2926]: E0124 00:55:53.503963 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:55:53.505241 kubelet[2926]: E0124 00:55:53.503991 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:55:53.505241 kubelet[2926]: E0124 00:55:53.504010 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.506154 kubelet[2926]: E0124 00:55:53.504046 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf9b891d63087e5404cdbb2f89a4093f8f98e9c8cd937eea0a992a33b2222c03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:55:53.506154 kubelet[2926]: E0124 00:55:53.504054 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:55:53.506154 kubelet[2926]: E0124 00:55:53.504089 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:55:53.506896 kubelet[2926]: E0124 00:55:53.504121 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6967df49185029fea0dfca963b5a554875120b230b83fd0da6d8008c93670212\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:55:53.565765 containerd[1614]: time="2026-01-24T00:55:53.564632257Z" level=error msg="Failed to destroy network for sandbox \"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.587116 systemd[1]: run-netns-cni\x2d1dc3f43e\x2dafcd\x2d74d3\x2d51c1\x2d4071a105aec2.mount: Deactivated successfully. 
Jan 24 00:55:53.605541 containerd[1614]: time="2026-01-24T00:55:53.603413843Z" level=error msg="Failed to destroy network for sandbox \"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.618543 containerd[1614]: time="2026-01-24T00:55:53.618166678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.623119 kubelet[2926]: E0124 00:55:53.622987 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.623435 kubelet[2926]: E0124 00:55:53.623140 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:55:53.623435 kubelet[2926]: E0124 00:55:53.623167 2926 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:55:53.623435 kubelet[2926]: E0124 00:55:53.623213 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea7ad66577e685b20e19d986b17a06be38815da14956cdc0cb03588eeb2c99d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:55:53.645529 containerd[1614]: time="2026-01-24T00:55:53.645112478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.655860 kubelet[2926]: E0124 00:55:53.655388 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.655986 kubelet[2926]: E0124 00:55:53.655880 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:55:53.655986 kubelet[2926]: E0124 00:55:53.655914 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:55:53.655986 kubelet[2926]: E0124 00:55:53.655966 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a81e89187b91b3016104f76f56d7a2684f53a233aeb6d702d6455f0e39a1385\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-5fkkt" podUID="3894e361-72d8-4a9b-bd6a-f0764e209428" Jan 24 00:55:53.715048 containerd[1614]: time="2026-01-24T00:55:53.714569808Z" level=error msg="Failed to destroy network for sandbox \"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.729512 containerd[1614]: time="2026-01-24T00:55:53.729453471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.731936 containerd[1614]: time="2026-01-24T00:55:53.730864273Z" level=error msg="Failed to destroy network for sandbox \"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.733489 kubelet[2926]: E0124 00:55:53.733420 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.743592 kubelet[2926]: E0124 00:55:53.741510 2926 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:53.743592 kubelet[2926]: E0124 00:55:53.741569 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:55:53.745335 kubelet[2926]: E0124 00:55:53.744648 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a85d1f048c4e205e39e13d83cac6b66f4ba3639a11ff48a473a3f0d8bad1caab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:55:53.756457 containerd[1614]: time="2026-01-24T00:55:53.755712974Z" level=error msg="Failed to destroy network for sandbox \"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.773242 containerd[1614]: time="2026-01-24T00:55:53.773004925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.776994 kubelet[2926]: E0124 00:55:53.774142 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.776994 kubelet[2926]: E0124 00:55:53.774233 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:55:53.776994 kubelet[2926]: E0124 00:55:53.774454 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:55:53.777494 kubelet[2926]: E0124 00:55:53.777446 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a5e167a5dbea8c29d3273a7aa20dacf87bb570676ea30fedfb95eced3c32295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cb6gm" podUID="28e628c9-c589-459d-8af8-92dc28cf7661" Jan 24 00:55:53.794645 containerd[1614]: time="2026-01-24T00:55:53.793561325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.795916 kubelet[2926]: E0124 00:55:53.795518 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:55:53.795916 kubelet[2926]: E0124 00:55:53.795695 2926 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:53.796189 kubelet[2926]: E0124 00:55:53.795973 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:55:53.796189 kubelet[2926]: E0124 00:55:53.796044 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e30607cdfc7f4005ef4e10f2d13c495a46ba54e49197e06896347ee61370bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:55:53.819224 kubelet[2926]: E0124 00:55:53.819179 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:54.032470 systemd[1]: 
run-netns-cni\x2d59cb52bc\x2da80b\x2dd9ac\x2d9223\x2d55e35514bbaa.mount: Deactivated successfully. Jan 24 00:55:54.032632 systemd[1]: run-netns-cni\x2d3e056ead\x2d0a14\x2dc763\x2d7425\x2dcbf3cb7b2887.mount: Deactivated successfully. Jan 24 00:55:54.032849 systemd[1]: run-netns-cni\x2d7ceff900\x2d8d3b\x2d6f5b\x2d186d\x2d6373c1f585a0.mount: Deactivated successfully. Jan 24 00:55:54.032962 systemd[1]: run-netns-cni\x2d7f4e462b\x2da48f\x2debff\x2dbfd9\x2d96b8f5b03a41.mount: Deactivated successfully. Jan 24 00:56:00.817776 kubelet[2926]: E0124 00:56:00.816596 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:04.813709 containerd[1614]: time="2026-01-24T00:56:04.809025121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:05.293538 containerd[1614]: time="2026-01-24T00:56:05.292078954Z" level=error msg="Failed to destroy network for sandbox \"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:05.302197 systemd[1]: run-netns-cni\x2d5cbbfe0d\x2de9ff\x2df18f\x2d3d18\x2d6a255128ad00.mount: Deactivated successfully. 
Jan 24 00:56:05.387712 containerd[1614]: time="2026-01-24T00:56:05.387569688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:05.391128 kubelet[2926]: E0124 00:56:05.390990 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:05.391960 kubelet[2926]: E0124 00:56:05.391165 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:05.391960 kubelet[2926]: E0124 00:56:05.391207 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:05.391960 kubelet[2926]: E0124 00:56:05.391464 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e187906ec477cc2fe61cc471e42302d692048fd7eca189115d46abbca6556e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:05.819704 containerd[1614]: time="2026-01-24T00:56:05.816976202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:06.361650 containerd[1614]: time="2026-01-24T00:56:06.361118662Z" level=error msg="Failed to destroy network for sandbox \"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:06.372722 systemd[1]: run-netns-cni\x2d8917bf24\x2d0040\x2d4c6e\x2d518b\x2d3b0afc1a8a1f.mount: Deactivated successfully. 
Jan 24 00:56:06.391493 containerd[1614]: time="2026-01-24T00:56:06.391064940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:06.394663 kubelet[2926]: E0124 00:56:06.393952 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:06.394663 kubelet[2926]: E0124 00:56:06.394143 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:06.394663 kubelet[2926]: E0124 00:56:06.394178 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:06.400524 kubelet[2926]: E0124 00:56:06.394237 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4c6136a618151e966c02e9080ff9b8e739d1b4241f616eef9cdfef4e4f214e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:56:06.821130 kubelet[2926]: E0124 00:56:06.819036 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:06.832786 containerd[1614]: time="2026-01-24T00:56:06.832160005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:06.835056 containerd[1614]: time="2026-01-24T00:56:06.834905792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:07.573161 containerd[1614]: time="2026-01-24T00:56:07.572021855Z" level=error msg="Failed to destroy network for sandbox \"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 24 00:56:07.581950 systemd[1]: run-netns-cni\x2d2cf851d8\x2dba2a\x2d1f61\x2d074b\x2dce4ddd94311a.mount: Deactivated successfully. Jan 24 00:56:07.599474 containerd[1614]: time="2026-01-24T00:56:07.597611964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:07.599940 kubelet[2926]: E0124 00:56:07.598991 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:07.599940 kubelet[2926]: E0124 00:56:07.599073 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:07.599940 kubelet[2926]: E0124 00:56:07.599109 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:07.602219 kubelet[2926]: E0124 00:56:07.599171 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40e49cec09c9afa0e64b1e276183ec16c68b07cdd5363b866da472a07fecbcc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:56:07.686628 containerd[1614]: time="2026-01-24T00:56:07.684770409Z" level=error msg="Failed to destroy network for sandbox \"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:07.691124 systemd[1]: run-netns-cni\x2d6c6aa410\x2df177\x2d04c2\x2da105\x2dd83fd95252ab.mount: Deactivated successfully. 
Jan 24 00:56:07.702558 containerd[1614]: time="2026-01-24T00:56:07.700168338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:07.704629 kubelet[2926]: E0124 00:56:07.703063 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:07.704629 kubelet[2926]: E0124 00:56:07.703141 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:07.704629 kubelet[2926]: E0124 00:56:07.703169 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:07.704909 kubelet[2926]: E0124 00:56:07.703220 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71462a085f117701f5eaaea85b1f4eff2ef03e6f640faf91e61b617f0e0d92cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cb6gm" podUID="28e628c9-c589-459d-8af8-92dc28cf7661" Jan 24 00:56:07.820596 kubelet[2926]: E0124 00:56:07.820554 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:07.827762 containerd[1614]: time="2026-01-24T00:56:07.822040591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:07.847187 containerd[1614]: time="2026-01-24T00:56:07.830124888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:08.349910 containerd[1614]: time="2026-01-24T00:56:08.344168424Z" level=error msg="Failed to destroy network for sandbox \"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.364873 systemd[1]: 
run-netns-cni\x2db8276b08\x2d37e8\x2d428e\x2d6f7e\x2d9ddf8618cd4a.mount: Deactivated successfully. Jan 24 00:56:08.396090 containerd[1614]: time="2026-01-24T00:56:08.392741878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.402723 kubelet[2926]: E0124 00:56:08.397807 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.402723 kubelet[2926]: E0124 00:56:08.397890 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:08.402723 kubelet[2926]: E0124 00:56:08.397919 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:08.402890 kubelet[2926]: E0124 00:56:08.397974 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0512d061c48a1eec7945995ca9a5ce67071f9646fba737bbe295e08271fbb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:56:08.543217 containerd[1614]: time="2026-01-24T00:56:08.537825364Z" level=error msg="Failed to destroy network for sandbox \"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.565955 systemd[1]: run-netns-cni\x2d7911abe1\x2d2dbe\x2da4e7\x2d363a\x2dab34ee2ea966.mount: Deactivated successfully. 
Jan 24 00:56:08.577748 containerd[1614]: time="2026-01-24T00:56:08.574929825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.589180 kubelet[2926]: E0124 00:56:08.580138 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:08.589180 kubelet[2926]: E0124 00:56:08.580220 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:56:08.589180 kubelet[2926]: E0124 00:56:08.581776 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:56:08.601702 kubelet[2926]: E0124 00:56:08.590826 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcc6e54a6543b5a45d3028ad8f1d5bdc868d35d7cdff81c0e0793b9209a3ffb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5fkkt" podUID="3894e361-72d8-4a9b-bd6a-f0764e209428" Jan 24 00:56:08.812107 containerd[1614]: time="2026-01-24T00:56:08.812050114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:08.824196 containerd[1614]: time="2026-01-24T00:56:08.821544001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:09.284073 containerd[1614]: time="2026-01-24T00:56:09.283700425Z" level=error msg="Failed to destroy network for sandbox \"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.289086 systemd[1]: run-netns-cni\x2da4ef9220\x2d922f\x2d99ed\x2d31ff\x2d91db8013a230.mount: Deactivated successfully. 
Jan 24 00:56:09.310651 containerd[1614]: time="2026-01-24T00:56:09.307669540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.318100 kubelet[2926]: E0124 00:56:09.314424 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.318100 kubelet[2926]: E0124 00:56:09.314513 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:09.318100 kubelet[2926]: E0124 00:56:09.314668 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:09.320112 kubelet[2926]: E0124 00:56:09.314729 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b220d6a13cd4fc3b25d90ad3b569e700743a544d202353a8a50ca778be3b1e6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:09.348190 containerd[1614]: time="2026-01-24T00:56:09.346913341Z" level=error msg="Failed to destroy network for sandbox \"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.358656 containerd[1614]: time="2026-01-24T00:56:09.358459858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.359233 systemd[1]: run-netns-cni\x2db9094ad9\x2d4391\x2d55d9\x2d5227\x2dae692f70738a.mount: 
Deactivated successfully. Jan 24 00:56:09.364486 kubelet[2926]: E0124 00:56:09.364094 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:09.364486 kubelet[2926]: E0124 00:56:09.364170 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:09.364486 kubelet[2926]: E0124 00:56:09.364203 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:09.368232 kubelet[2926]: E0124 00:56:09.365818 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fef8281ec0eeb76ff7df074c950efd39a7f3535da7c4632b88276a23feb491ba\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658695dc4c-85mkw" podUID="da81df37-b61c-4bd8-af14-44830145232f" Jan 24 00:56:15.832690 containerd[1614]: time="2026-01-24T00:56:15.825944045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:16.734750 containerd[1614]: time="2026-01-24T00:56:16.714893570Z" level=error msg="Failed to destroy network for sandbox \"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:16.762927 systemd[1]: run-netns-cni\x2d8eb8b8dd\x2d312f\x2dd615\x2d4ddf\x2d57676f057c71.mount: Deactivated successfully. 
Jan 24 00:56:16.828693 containerd[1614]: time="2026-01-24T00:56:16.827483222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:16.832566 kubelet[2926]: E0124 00:56:16.830549 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:16.839500 kubelet[2926]: E0124 00:56:16.834485 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:16.839500 kubelet[2926]: E0124 00:56:16.834540 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:16.839500 kubelet[2926]: E0124 00:56:16.834601 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d1faf4bf6912dec2fde9422197fd9a046dc4a497a51344234225b081e16402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:18.813694 containerd[1614]: time="2026-01-24T00:56:18.813536779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:19.489853 containerd[1614]: time="2026-01-24T00:56:19.488883098Z" level=error msg="Failed to destroy network for sandbox \"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:19.526791 systemd[1]: run-netns-cni\x2dc1d7abc4\x2d1fda\x2d934c\x2d999e\x2d5c6f224a8c94.mount: Deactivated successfully. 
Jan 24 00:56:19.588518 containerd[1614]: time="2026-01-24T00:56:19.588443241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:19.595607 kubelet[2926]: E0124 00:56:19.593581 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:19.595607 kubelet[2926]: E0124 00:56:19.593742 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:19.595607 kubelet[2926]: E0124 00:56:19.593772 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:19.596533 kubelet[2926]: E0124 00:56:19.593832 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2745b2cf74b17691f51f227776cf2561e94a593607492270bbb169df7e035b10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:56:19.831147 containerd[1614]: time="2026-01-24T00:56:19.826802966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:19.873169 containerd[1614]: time="2026-01-24T00:56:19.864101371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:20.867509 containerd[1614]: time="2026-01-24T00:56:20.863923115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:20.875467 containerd[1614]: time="2026-01-24T00:56:20.870200626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:21.684981 containerd[1614]: time="2026-01-24T00:56:21.684923036Z" level=error msg="Failed to destroy 
network for sandbox \"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:21.714549 systemd[1]: run-netns-cni\x2df44454a7\x2dca54\x2da876\x2ddc69\x2d3ae1e62a9576.mount: Deactivated successfully. Jan 24 00:56:21.816595 containerd[1614]: time="2026-01-24T00:56:21.816134581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:21.819682 kubelet[2926]: E0124 00:56:21.817117 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:21.825166 containerd[1614]: time="2026-01-24T00:56:21.823113256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:21.828967 kubelet[2926]: E0124 00:56:21.826859 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:21.828967 kubelet[2926]: E0124 00:56:21.826943 2926 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:21.828967 kubelet[2926]: E0124 00:56:21.826974 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:21.829154 kubelet[2926]: E0124 00:56:21.827028 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"049303d33b71680622f634da4865f68e8dfbe131629a8b72025934cb7b995353\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:56:21.923628 containerd[1614]: time="2026-01-24T00:56:21.919723504Z" level=error msg="Failed to destroy network for sandbox \"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:21.927672 systemd[1]: run-netns-cni\x2d83ddab8b\x2d2569\x2df77c\x2d7d79\x2d32b7d848d43d.mount: Deactivated successfully. Jan 24 00:56:22.006923 containerd[1614]: time="2026-01-24T00:56:22.002636903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.033178 kubelet[2926]: E0124 00:56:22.027207 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.033178 kubelet[2926]: E0124 00:56:22.027483 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:22.033178 kubelet[2926]: E0124 00:56:22.027665 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:22.033672 kubelet[2926]: E0124 00:56:22.027723 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c008dd40835041ff8dbea595c3da89fe1decfbc1778afbfedd3e4d38dbaa1d3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:56:22.650242 containerd[1614]: time="2026-01-24T00:56:22.645439673Z" level=error msg="Failed to destroy network for sandbox \"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.657236 systemd[1]: run-netns-cni\x2db84642be\x2d188a\x2d841d\x2d4291\x2d09769d0c92e6.mount: Deactivated successfully. 
Jan 24 00:56:22.661914 containerd[1614]: time="2026-01-24T00:56:22.658506224Z" level=error msg="Failed to destroy network for sandbox \"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.670614 systemd[1]: run-netns-cni\x2d866c4ab3\x2d82cd\x2df9e4\x2da980\x2dceb323fd5d78.mount: Deactivated successfully. Jan 24 00:56:22.691672 containerd[1614]: time="2026-01-24T00:56:22.689857439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.726518 containerd[1614]: time="2026-01-24T00:56:22.725559254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.744150 kubelet[2926]: E0124 00:56:22.741995 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.744150 kubelet[2926]: E0124 00:56:22.742070 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:22.744150 kubelet[2926]: E0124 00:56:22.742101 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:22.751178 kubelet[2926]: E0124 00:56:22.742208 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84dbd3efd46e0e4bb2cfb87f8924e8e4d75b28360f30d2ae83253a9272fd15f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" 
podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:22.751178 kubelet[2926]: E0124 00:56:22.742468 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.751178 kubelet[2926]: E0124 00:56:22.742509 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:22.774837 kubelet[2926]: E0124 00:56:22.742533 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:22.774837 kubelet[2926]: E0124 00:56:22.742580 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"70f4c5f019379e138a04ccf19d65771941afebb79b189e08add5747c6b306d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658695dc4c-85mkw" podUID="da81df37-b61c-4bd8-af14-44830145232f" Jan 24 00:56:22.804055 containerd[1614]: time="2026-01-24T00:56:22.795569064Z" level=error msg="Failed to destroy network for sandbox \"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.847861 systemd[1]: run-netns-cni\x2d94cd158e\x2db655\x2d1c8a\x2d98b2\x2d0e51f35ffa7e.mount: Deactivated successfully. Jan 24 00:56:22.866193 containerd[1614]: time="2026-01-24T00:56:22.857221397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:22.866461 kubelet[2926]: E0124 00:56:22.848668 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:22.892918 containerd[1614]: time="2026-01-24T00:56:22.892099616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.893866 kubelet[2926]: E0124 00:56:22.893044 2926 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:22.893866 kubelet[2926]: E0124 00:56:22.893107 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:22.893866 kubelet[2926]: E0124 00:56:22.893144 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:22.894048 kubelet[2926]: E0124 00:56:22.893192 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b7a2c8b89615c9e913a6da2c9d3979452262c850a54ad8483dae229c2aeeb28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cb6gm" podUID="28e628c9-c589-459d-8af8-92dc28cf7661" Jan 24 00:56:23.469390 containerd[1614]: time="2026-01-24T00:56:23.461223932Z" level=error msg="Failed to destroy network for sandbox \"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:23.497549 systemd[1]: run-netns-cni\x2d442efd2f\x2d6ee0\x2d6ece\x2da9f9\x2d09cc21c97f03.mount: Deactivated successfully. Jan 24 00:56:23.555218 containerd[1614]: time="2026-01-24T00:56:23.554507993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:23.555904 kubelet[2926]: E0124 00:56:23.555584 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:23.555904 kubelet[2926]: E0124 00:56:23.555653 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:56:23.555904 kubelet[2926]: E0124 00:56:23.555786 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:56:23.556081 kubelet[2926]: E0124 00:56:23.555837 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8d61f97a2d849b88ca47d31a16851cf2dc4591cfcd6dd098c9205435fcbe969\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5fkkt" podUID="3894e361-72d8-4a9b-bd6a-f0764e209428" Jan 24 00:56:28.807776 containerd[1614]: time="2026-01-24T00:56:28.807220349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:29.206173 containerd[1614]: time="2026-01-24T00:56:29.206015808Z" level=error msg="Failed to destroy network for sandbox \"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:29.218832 systemd[1]: run-netns-cni\x2d7fdbc782\x2d28f7\x2d1377\x2d297a\x2df2b1cce3d16c.mount: Deactivated successfully. Jan 24 00:56:29.230137 containerd[1614]: time="2026-01-24T00:56:29.225747113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:29.231050 kubelet[2926]: E0124 00:56:29.226623 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:29.231050 kubelet[2926]: E0124 00:56:29.226702 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:29.231050 kubelet[2926]: E0124 00:56:29.226730 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:29.233085 kubelet[2926]: E0124 00:56:29.226786 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"666618a6da97f388a82900c580d73ab1c9470617c5fdd1eb63ab811164652729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:33.807472 kubelet[2926]: E0124 00:56:33.806848 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:33.849011 containerd[1614]: time="2026-01-24T00:56:33.846571541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:33.849011 containerd[1614]: time="2026-01-24T00:56:33.848460827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:34.578998 containerd[1614]: time="2026-01-24T00:56:34.578229601Z" level=error msg="Failed to destroy network for sandbox 
\"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.583496 containerd[1614]: time="2026-01-24T00:56:34.580742825Z" level=error msg="Failed to destroy network for sandbox \"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.591982 systemd[1]: run-netns-cni\x2d50172e0c\x2d4f9a\x2d3afb\x2d1f89\x2df50571ed1edf.mount: Deactivated successfully. Jan 24 00:56:34.598882 containerd[1614]: time="2026-01-24T00:56:34.598753430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.606587 kubelet[2926]: E0124 00:56:34.601743 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.606587 kubelet[2926]: E0124 00:56:34.601818 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:34.606587 kubelet[2926]: E0124 00:56:34.601851 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cb6gm" Jan 24 00:56:34.606843 kubelet[2926]: E0124 00:56:34.601982 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cb6gm_kube-system(28e628c9-c589-459d-8af8-92dc28cf7661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43558dcb05df3d10a1bb962f359f2d3f932569cabf50e5fd433b3fb9483c8ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cb6gm" podUID="28e628c9-c589-459d-8af8-92dc28cf7661" Jan 24 00:56:34.610568 systemd[1]: run-netns-cni\x2d79c48b66\x2d3d5f\x2d5309\x2d8726\x2d052172d45253.mount: Deactivated successfully. 
Jan 24 00:56:34.625739 containerd[1614]: time="2026-01-24T00:56:34.625521606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658695dc4c-85mkw,Uid:da81df37-b61c-4bd8-af14-44830145232f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.631718 kubelet[2926]: E0124 00:56:34.629079 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:34.653396 kubelet[2926]: E0124 00:56:34.641827 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:34.653396 kubelet[2926]: E0124 00:56:34.641870 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-658695dc4c-85mkw" Jan 24 00:56:34.653396 kubelet[2926]: E0124 00:56:34.641926 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658695dc4c-85mkw_calico-system(da81df37-b61c-4bd8-af14-44830145232f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8bbe481b015346eb8e9275c203adc78a44e0194ad170685b668f28820a6e50b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658695dc4c-85mkw" podUID="da81df37-b61c-4bd8-af14-44830145232f" Jan 24 00:56:34.878068 containerd[1614]: time="2026-01-24T00:56:34.864111911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:34.878068 containerd[1614]: time="2026-01-24T00:56:34.871885980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:34.983040 containerd[1614]: time="2026-01-24T00:56:34.982982784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:35.833359 containerd[1614]: time="2026-01-24T00:56:35.818665602Z" level=error msg="Failed to destroy network for sandbox \"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 24 00:56:35.828239 systemd[1]: run-netns-cni\x2d7dbcede3\x2d80bb\x2d47e9\x2d5e65\x2ddd148df83343.mount: Deactivated successfully. Jan 24 00:56:35.887606 containerd[1614]: time="2026-01-24T00:56:35.864609532Z" level=error msg="Failed to destroy network for sandbox \"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.883482 systemd[1]: run-netns-cni\x2d380b1b03\x2d1495\x2d746d\x2d5fe2\x2d325baf388c00.mount: Deactivated successfully. Jan 24 00:56:35.889867 containerd[1614]: time="2026-01-24T00:56:35.889579384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:35.957684 containerd[1614]: time="2026-01-24T00:56:35.956586748Z" level=error msg="Failed to destroy network for sandbox \"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.960546 containerd[1614]: time="2026-01-24T00:56:35.959934790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.972023 kubelet[2926]: E0124 00:56:35.962995 2926 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.972023 kubelet[2926]: E0124 00:56:35.970874 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:35.972023 kubelet[2926]: E0124 00:56:35.970916 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" Jan 24 00:56:35.972957 containerd[1614]: time="2026-01-24T00:56:35.969766429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.968677 systemd[1]: 
run-netns-cni\x2d7f407199\x2d72d8\x2d4990\x2d3d75\x2dcae1f797e644.mount: Deactivated successfully. Jan 24 00:56:35.974656 kubelet[2926]: E0124 00:56:35.973931 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ed92c810e0b16dc59e47ec54bc6b2825d4370e7612a52bc284126f546a83776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:56:35.980670 kubelet[2926]: E0124 00:56:35.980630 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:35.982882 kubelet[2926]: E0124 00:56:35.982853 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:35.983830 kubelet[2926]: E0124 00:56:35.983803 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" Jan 24 00:56:35.984188 kubelet[2926]: E0124 00:56:35.984054 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"829bf31ffda45309c128114efd0ed75c9ce2b21f3d61c24b81acbe1fbf364c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:56:36.003548 containerd[1614]: time="2026-01-24T00:56:36.003494192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:36.011816 kubelet[2926]: E0124 00:56:36.011763 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:36.017752 kubelet[2926]: E0124 00:56:36.017715 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:36.018687 kubelet[2926]: E0124 00:56:36.018660 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" Jan 24 00:56:36.018820 kubelet[2926]: E0124 00:56:36.018794 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1c737943b947c07c7df7a5c2eff1f654bf5507afa6bb47f493e446dd2df2406\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:36.586442 containerd[1614]: time="2026-01-24T00:56:36.574785790Z" level=error msg="Failed to destroy network for sandbox \"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:36.594885 systemd[1]: run-netns-cni\x2dd617334b\x2d7daf\x2d51c9\x2dffa2\x2dffb0805923e1.mount: Deactivated successfully. Jan 24 00:56:36.629610 containerd[1614]: time="2026-01-24T00:56:36.624200725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:36.657566 kubelet[2926]: E0124 00:56:36.657504 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:36.657805 kubelet[2926]: E0124 00:56:36.657772 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:36.657971 kubelet[2926]: E0124 00:56:36.657944 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htrd2" Jan 24 00:56:36.658463 kubelet[2926]: E0124 00:56:36.658240 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79bf8f9e1308200218f9c65feefb81ffbe164f6382fc9ec31018fbb0aff5e61b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:56:38.810733 kubelet[2926]: E0124 00:56:38.810689 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:38.836626 containerd[1614]: time="2026-01-24T00:56:38.831123879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:39.594824 containerd[1614]: 
time="2026-01-24T00:56:39.594746548Z" level=error msg="Failed to destroy network for sandbox \"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:39.602884 systemd[1]: run-netns-cni\x2d1864d6f4\x2d4928\x2dc548\x2dc7e1\x2dc8b593f25c04.mount: Deactivated successfully. Jan 24 00:56:39.667677 containerd[1614]: time="2026-01-24T00:56:39.667230279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:39.669852 kubelet[2926]: E0124 00:56:39.669583 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:39.669852 kubelet[2926]: E0124 00:56:39.669662 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 
00:56:39.669852 kubelet[2926]: E0124 00:56:39.669697 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5fkkt" Jan 24 00:56:39.673134 kubelet[2926]: E0124 00:56:39.669763 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5fkkt_kube-system(3894e361-72d8-4a9b-bd6a-f0764e209428)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91338353c580a51ce62cdd1d562226ea2c3d463006dd058380cfe8f3f5c6fcbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5fkkt" podUID="3894e361-72d8-4a9b-bd6a-f0764e209428" Jan 24 00:56:39.814654 containerd[1614]: time="2026-01-24T00:56:39.814504341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:40.296128 containerd[1614]: time="2026-01-24T00:56:40.295747438Z" level=error msg="Failed to destroy network for sandbox \"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:40.304146 systemd[1]: 
run-netns-cni\x2dcc6931d6\x2dec8f\x2da4b7\x2dbb95\x2d3ffc65185e96.mount: Deactivated successfully. Jan 24 00:56:40.318851 containerd[1614]: time="2026-01-24T00:56:40.318529101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:40.319455 kubelet[2926]: E0124 00:56:40.319146 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:40.319455 kubelet[2926]: E0124 00:56:40.319228 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:40.323235 kubelet[2926]: E0124 00:56:40.319582 2926 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5tr2c" Jan 24 00:56:40.323235 kubelet[2926]: E0124 00:56:40.319756 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"321f40085d8fa2e45358d3edd8164c9c781454906a81ed291f9a1e7b9dca5a78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:40.726729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530105109.mount: Deactivated successfully. 
Jan 24 00:56:40.855530 containerd[1614]: time="2026-01-24T00:56:40.851595781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:40.861458 containerd[1614]: time="2026-01-24T00:56:40.860705549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:56:40.869163 containerd[1614]: time="2026-01-24T00:56:40.867475329Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:40.890229 containerd[1614]: time="2026-01-24T00:56:40.889477220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:40.892852 containerd[1614]: time="2026-01-24T00:56:40.892184893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 48.221938744s" Jan 24 00:56:40.892852 containerd[1614]: time="2026-01-24T00:56:40.892591745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:56:40.997561 containerd[1614]: time="2026-01-24T00:56:40.995148517Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:56:41.127054 containerd[1614]: time="2026-01-24T00:56:41.124529217Z" level=info msg="Container 
fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:41.219155 containerd[1614]: time="2026-01-24T00:56:41.219074777Z" level=info msg="CreateContainer within sandbox \"a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729\"" Jan 24 00:56:41.235237 containerd[1614]: time="2026-01-24T00:56:41.233604825Z" level=info msg="StartContainer for \"fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729\"" Jan 24 00:56:41.262729 containerd[1614]: time="2026-01-24T00:56:41.262549065Z" level=info msg="connecting to shim fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729" address="unix:///run/containerd/s/335d714dfd8d79ad2ce46d72ca6943036b1771048c247c84067259bdd6a46dc0" protocol=ttrpc version=3 Jan 24 00:56:41.669976 systemd[1]: Started cri-containerd-fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729.scope - libcontainer container fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729. 
Jan 24 00:56:41.986000 audit: BPF prog-id=170 op=LOAD Jan 24 00:56:41.986000 audit[4880]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.060599 kernel: audit: type=1334 audit(1769216201.986:576): prog-id=170 op=LOAD Jan 24 00:56:42.060780 kernel: audit: type=1300 audit(1769216201.986:576): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.060973 kernel: audit: type=1327 audit(1769216201.986:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:41.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:42.109017 kernel: audit: type=1334 audit(1769216201.986:577): prog-id=171 op=LOAD Jan 24 00:56:41.986000 audit: BPF prog-id=171 op=LOAD Jan 24 00:56:42.121179 kernel: audit: type=1300 audit(1769216201.986:577): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:56:41.986000 audit[4880]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.170598 kernel: audit: type=1327 audit(1769216201.986:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:41.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:41.986000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:56:42.244681 kernel: audit: type=1334 audit(1769216201.986:578): prog-id=171 op=UNLOAD Jan 24 00:56:41.986000 audit[4880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.296079 kernel: audit: type=1300 audit(1769216201.986:578): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:42.296218 kernel: audit: type=1327 audit(1769216201.986:578): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:41.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:42.331971 containerd[1614]: time="2026-01-24T00:56:42.330739734Z" level=info msg="StartContainer for \"fa35da4153afec9f5b122771727850dea85ed364bd3c41d231c4a49e7c687729\" returns successfully" Jan 24 00:56:41.986000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:56:42.367567 kernel: audit: type=1334 audit(1769216201.986:579): prog-id=170 op=UNLOAD Jan 24 00:56:41.986000 audit[4880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:41.986000 audit: BPF prog-id=172 op=LOAD Jan 24 00:56:41.986000 audit[4880]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3473 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:41.986000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333564613431353361666563396635623132323737313732373835 Jan 24 00:56:42.915452 kubelet[2926]: E0124 00:56:42.907101 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:43.042974 kubelet[2926]: I0124 00:56:43.040100 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6cvqh" podStartSLOduration=4.520414065 podStartE2EDuration="1m29.040077629s" podCreationTimestamp="2026-01-24 00:55:14 +0000 UTC" firstStartedPulling="2026-01-24 00:55:16.400778993 +0000 UTC m=+57.980008726" lastFinishedPulling="2026-01-24 00:56:40.920442507 +0000 UTC m=+142.499672290" observedRunningTime="2026-01-24 00:56:43.02246235 +0000 UTC m=+144.601692083" watchObservedRunningTime="2026-01-24 00:56:43.040077629 +0000 UTC m=+144.619307382" Jan 24 00:56:43.193654 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:56:43.193898 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 24 00:56:43.943523 kubelet[2926]: E0124 00:56:43.943198 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:44.464460 kubelet[2926]: I0124 00:56:44.461472 2926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da81df37-b61c-4bd8-af14-44830145232f-whisker-backend-key-pair\") pod \"da81df37-b61c-4bd8-af14-44830145232f\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " Jan 24 00:56:44.464460 kubelet[2926]: I0124 00:56:44.464068 2926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da81df37-b61c-4bd8-af14-44830145232f-whisker-ca-bundle\") pod \"da81df37-b61c-4bd8-af14-44830145232f\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " Jan 24 00:56:44.464460 kubelet[2926]: I0124 00:56:44.464104 2926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk75z\" (UniqueName: \"kubernetes.io/projected/da81df37-b61c-4bd8-af14-44830145232f-kube-api-access-rk75z\") pod \"da81df37-b61c-4bd8-af14-44830145232f\" (UID: \"da81df37-b61c-4bd8-af14-44830145232f\") " Jan 24 00:56:44.470717 kubelet[2926]: I0124 00:56:44.470650 2926 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da81df37-b61c-4bd8-af14-44830145232f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da81df37-b61c-4bd8-af14-44830145232f" (UID: "da81df37-b61c-4bd8-af14-44830145232f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:56:44.505006 systemd[1]: var-lib-kubelet-pods-da81df37\x2db61c\x2d4bd8\x2daf14\x2d44830145232f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 24 00:56:44.513513 kubelet[2926]: I0124 00:56:44.506562 2926 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da81df37-b61c-4bd8-af14-44830145232f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da81df37-b61c-4bd8-af14-44830145232f" (UID: "da81df37-b61c-4bd8-af14-44830145232f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:56:44.511173 systemd[1708]: Created slice background.slice - User Background Tasks Slice. Jan 24 00:56:44.518504 systemd[1]: var-lib-kubelet-pods-da81df37\x2db61c\x2d4bd8\x2daf14\x2d44830145232f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drk75z.mount: Deactivated successfully. Jan 24 00:56:44.520107 systemd[1708]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 24 00:56:44.524132 kubelet[2926]: I0124 00:56:44.524050 2926 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da81df37-b61c-4bd8-af14-44830145232f-kube-api-access-rk75z" (OuterVolumeSpecName: "kube-api-access-rk75z") pod "da81df37-b61c-4bd8-af14-44830145232f" (UID: "da81df37-b61c-4bd8-af14-44830145232f"). InnerVolumeSpecName "kube-api-access-rk75z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:56:44.565619 kubelet[2926]: I0124 00:56:44.565567 2926 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rk75z\" (UniqueName: \"kubernetes.io/projected/da81df37-b61c-4bd8-af14-44830145232f-kube-api-access-rk75z\") on node \"localhost\" DevicePath \"\"" Jan 24 00:56:44.566074 kubelet[2926]: I0124 00:56:44.566022 2926 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da81df37-b61c-4bd8-af14-44830145232f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:56:44.566074 kubelet[2926]: I0124 00:56:44.566050 2926 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da81df37-b61c-4bd8-af14-44830145232f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:56:44.725053 systemd[1708]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 24 00:56:45.032028 systemd[1]: Removed slice kubepods-besteffort-podda81df37_b61c_4bd8_af14_44830145232f.slice - libcontainer container kubepods-besteffort-podda81df37_b61c_4bd8_af14_44830145232f.slice. Jan 24 00:56:45.599079 systemd[1]: Created slice kubepods-besteffort-pode539ab3d_80aa_4b97_836f_149823e6c41d.slice - libcontainer container kubepods-besteffort-pode539ab3d_80aa_4b97_836f_149823e6c41d.slice. 
Jan 24 00:56:45.714170 kubelet[2926]: I0124 00:56:45.714102 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e539ab3d-80aa-4b97-836f-149823e6c41d-whisker-backend-key-pair\") pod \"whisker-5fcf8bbd55-mfbxj\" (UID: \"e539ab3d-80aa-4b97-836f-149823e6c41d\") " pod="calico-system/whisker-5fcf8bbd55-mfbxj" Jan 24 00:56:45.715918 kubelet[2926]: I0124 00:56:45.715637 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e539ab3d-80aa-4b97-836f-149823e6c41d-whisker-ca-bundle\") pod \"whisker-5fcf8bbd55-mfbxj\" (UID: \"e539ab3d-80aa-4b97-836f-149823e6c41d\") " pod="calico-system/whisker-5fcf8bbd55-mfbxj" Jan 24 00:56:45.715918 kubelet[2926]: I0124 00:56:45.715821 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9rc\" (UniqueName: \"kubernetes.io/projected/e539ab3d-80aa-4b97-836f-149823e6c41d-kube-api-access-7h9rc\") pod \"whisker-5fcf8bbd55-mfbxj\" (UID: \"e539ab3d-80aa-4b97-836f-149823e6c41d\") " pod="calico-system/whisker-5fcf8bbd55-mfbxj" Jan 24 00:56:45.811439 kubelet[2926]: E0124 00:56:45.811149 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:45.823545 containerd[1614]: time="2026-01-24T00:56:45.822158316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:45.831506 kubelet[2926]: I0124 00:56:45.831470 2926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da81df37-b61c-4bd8-af14-44830145232f" path="/var/lib/kubelet/pods/da81df37-b61c-4bd8-af14-44830145232f/volumes" Jan 24 00:56:46.244668 
containerd[1614]: time="2026-01-24T00:56:46.244194526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf8bbd55-mfbxj,Uid:e539ab3d-80aa-4b97-836f-149823e6c41d,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:47.701100 systemd-networkd[1509]: calid2966e436e1: Link UP Jan 24 00:56:47.718209 systemd-networkd[1509]: calid2966e436e1: Gained carrier Jan 24 00:56:47.820011 containerd[1614]: 2026-01-24 00:56:46.206 [INFO][4989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:56:47.820011 containerd[1614]: 2026-01-24 00:56:46.367 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0 coredns-668d6bf9bc- kube-system 28e628c9-c589-459d-8af8-92dc28cf7661 1030 0 2026-01-24 00:54:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cb6gm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2966e436e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-" Jan 24 00:56:47.820011 containerd[1614]: 2026-01-24 00:56:46.370 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.820011 containerd[1614]: 2026-01-24 00:56:47.108 [INFO][5025] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" 
HandleID="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Workload="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.111 [INFO][5025] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" HandleID="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Workload="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012a820), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cb6gm", "timestamp":"2026-01-24 00:56:47.10888475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.111 [INFO][5025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.111 [INFO][5025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.113 [INFO][5025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.175 [INFO][5025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" host="localhost" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.237 [INFO][5025] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.285 [INFO][5025] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.303 [INFO][5025] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.319 [INFO][5025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:47.821213 containerd[1614]: 2026-01-24 00:56:47.326 [INFO][5025] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" host="localhost" Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.344 [INFO][5025] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.367 [INFO][5025] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" host="localhost" Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.401 [INFO][5025] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" host="localhost" Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.401 [INFO][5025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" host="localhost" Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.401 [INFO][5025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:47.824953 containerd[1614]: 2026-01-24 00:56:47.401 [INFO][5025] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" HandleID="k8s-pod-network.799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Workload="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.825181 containerd[1614]: 2026-01-24 00:56:47.442 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28e628c9-c589-459d-8af8-92dc28cf7661", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cb6gm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2966e436e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:47.831023 containerd[1614]: 2026-01-24 00:56:47.449 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.831023 containerd[1614]: 2026-01-24 00:56:47.452 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2966e436e1 ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.831023 containerd[1614]: 2026-01-24 00:56:47.698 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.831971 kubelet[2926]: E0124 00:56:47.826844 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:47.849578 containerd[1614]: 2026-01-24 00:56:47.701 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28e628c9-c589-459d-8af8-92dc28cf7661", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a", Pod:"coredns-668d6bf9bc-cb6gm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2966e436e1", MAC:"e2:81:fb:17:fe:d0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:47.849578 containerd[1614]: 2026-01-24 00:56:47.783 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cb6gm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cb6gm-eth0" Jan 24 00:56:47.917479 systemd-networkd[1509]: cali3d760a36ccf: Link UP Jan 24 00:56:47.923005 systemd-networkd[1509]: cali3d760a36ccf: Gained carrier Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:46.520 [INFO][5012] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:46.588 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0 whisker-5fcf8bbd55- calico-system e539ab3d-80aa-4b97-836f-149823e6c41d 1217 0 2026-01-24 00:56:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fcf8bbd55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5fcf8bbd55-mfbxj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3d760a36ccf [] [] }} ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" 
WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:46.589 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.109 [INFO][5035] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" HandleID="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Workload="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.119 [INFO][5035] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" HandleID="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Workload="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fda0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5fcf8bbd55-mfbxj", "timestamp":"2026-01-24 00:56:47.109192387 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.119 [INFO][5035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.402 [INFO][5035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.402 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.491 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.579 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.658 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.676 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.689 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.689 [INFO][5035] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.713 [INFO][5035] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3 Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.751 [INFO][5035] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.845 [INFO][5035] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.846 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" host="localhost" Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.846 [INFO][5035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:48.045609 containerd[1614]: 2026-01-24 00:56:47.846 [INFO][5035] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" HandleID="k8s-pod-network.ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Workload="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:47.882 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0", GenerateName:"whisker-5fcf8bbd55-", Namespace:"calico-system", SelfLink:"", UID:"e539ab3d-80aa-4b97-836f-149823e6c41d", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcf8bbd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5fcf8bbd55-mfbxj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d760a36ccf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:47.885 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:47.885 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d760a36ccf ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:47.918 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:47.921 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" 
WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0", GenerateName:"whisker-5fcf8bbd55-", Namespace:"calico-system", SelfLink:"", UID:"e539ab3d-80aa-4b97-836f-149823e6c41d", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcf8bbd55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3", Pod:"whisker-5fcf8bbd55-mfbxj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d760a36ccf", MAC:"8e:56:ef:53:ac:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:48.052730 containerd[1614]: 2026-01-24 00:56:48.009 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" Namespace="calico-system" Pod="whisker-5fcf8bbd55-mfbxj" WorkloadEndpoint="localhost-k8s-whisker--5fcf8bbd55--mfbxj-eth0" Jan 24 00:56:48.587476 containerd[1614]: time="2026-01-24T00:56:48.585953638Z" level=info msg="connecting to shim 
799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a" address="unix:///run/containerd/s/f3689d46d63c7ed9bd104eba52df9f5603a39dc444de54181fd043d0d295ac38" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:48.731178 containerd[1614]: time="2026-01-24T00:56:48.731121267Z" level=info msg="connecting to shim ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3" address="unix:///run/containerd/s/628706e3d6b9e17192c4772c799703061c9ddd9b6762fcb56750c28e261871e3" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:48.833525 containerd[1614]: time="2026-01-24T00:56:48.829062954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:48.849760 systemd-networkd[1509]: calid2966e436e1: Gained IPv6LL Jan 24 00:56:49.304899 systemd[1]: Started cri-containerd-799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a.scope - libcontainer container 799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a. 
Jan 24 00:56:49.706000 audit: BPF prog-id=173 op=LOAD Jan 24 00:56:49.751753 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 24 00:56:49.751958 kernel: audit: type=1334 audit(1769216209.706:581): prog-id=173 op=LOAD Jan 24 00:56:49.753212 systemd-networkd[1509]: cali3d760a36ccf: Gained IPv6LL Jan 24 00:56:49.742000 audit: BPF prog-id=174 op=LOAD Jan 24 00:56:49.788927 kernel: audit: type=1334 audit(1769216209.742:582): prog-id=174 op=LOAD Jan 24 00:56:49.742000 audit[5189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.789714 kernel: audit: type=1300 audit(1769216209.742:582): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.789782 kernel: audit: type=1327 audit(1769216209.742:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.823194 kernel: audit: type=1334 audit(1769216209.742:583): prog-id=174 op=UNLOAD Jan 24 00:56:49.823668 kernel: audit: type=1300 audit(1769216209.742:583): arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.823738 kernel: audit: type=1327 audit(1769216209.742:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.823785 kernel: audit: type=1334 audit(1769216209.750:584): prog-id=175 op=LOAD Jan 24 00:56:49.823820 kernel: audit: type=1300 audit(1769216209.750:584): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.742000 audit: BPF prog-id=174 op=UNLOAD Jan 24 00:56:49.742000 audit[5189]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.750000 audit: BPF prog-id=175 op=LOAD Jan 24 00:56:49.750000 audit[5189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:50.116535 containerd[1614]: time="2026-01-24T00:56:50.113775869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:50.132462 containerd[1614]: time="2026-01-24T00:56:50.130657310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:50.248790 kernel: audit: type=1327 audit(1769216209.750:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.750000 audit: BPF prog-id=176 op=LOAD Jan 24 00:56:49.750000 audit[5189]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.750000 audit: BPF prog-id=176 op=UNLOAD Jan 24 00:56:49.750000 audit[5189]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.750000 audit: BPF prog-id=175 op=UNLOAD Jan 24 00:56:49.750000 audit[5189]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:49.751000 audit: BPF prog-id=177 op=LOAD Jan 24 00:56:49.751000 audit[5189]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5168 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:49.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739396365383363636335653563386133396533636634323761653064 Jan 24 00:56:50.309866 systemd[1]: Started 
cri-containerd-ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3.scope - libcontainer container ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3. Jan 24 00:56:50.402811 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:56:50.834482 containerd[1614]: time="2026-01-24T00:56:50.827710434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:50.874204 containerd[1614]: time="2026-01-24T00:56:50.873884935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:51.003000 audit: BPF prog-id=178 op=LOAD Jan 24 00:56:51.137000 audit: BPF prog-id=179 op=LOAD Jan 24 00:56:51.137000 audit[5220]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.137000 audit: BPF prog-id=179 op=UNLOAD Jan 24 00:56:51.137000 audit[5220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.137000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.159000 audit: BPF prog-id=180 op=LOAD Jan 24 00:56:51.159000 audit[5220]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.164000 audit: BPF prog-id=181 op=LOAD Jan 24 00:56:51.164000 audit[5220]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.168000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:56:51.168000 audit[5220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:56:51.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.168000 audit: BPF prog-id=180 op=UNLOAD Jan 24 00:56:51.168000 audit[5220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.168000 audit: BPF prog-id=182 op=LOAD Jan 24 00:56:51.168000 audit[5220]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5185 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562663638383564323566326533643838333134363862656634356463 Jan 24 00:56:51.243703 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:56:51.303000 audit: BPF prog-id=183 op=LOAD Jan 24 00:56:51.303000 audit[5299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2849a150 a2=98 
a3=1fffffffffffffff items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.303000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:56:51.304000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:56:51.304000 audit[5299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff2849a120 a3=0 items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.304000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:56:51.305000 audit: BPF prog-id=184 op=LOAD Jan 24 00:56:51.305000 audit[5299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2849a030 a2=94 a3=3 items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.305000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:56:51.305000 audit: BPF prog-id=184 op=UNLOAD Jan 24 00:56:51.305000 audit[5299]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff2849a030 a2=94 a3=3 items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.305000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:56:51.305000 audit: BPF prog-id=185 op=LOAD Jan 24 00:56:51.305000 audit[5299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2849a070 a2=94 a3=7fff2849a250 items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.305000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:56:51.305000 audit: BPF prog-id=185 op=UNLOAD Jan 24 00:56:51.305000 audit[5299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff2849a070 a2=94 a3=7fff2849a250 items=0 ppid=5088 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.305000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 
00:56:51.349000 audit: BPF prog-id=186 op=LOAD Jan 24 00:56:51.349000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0ed0f0f0 a2=98 a3=3 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.350000 audit: BPF prog-id=186 op=UNLOAD Jan 24 00:56:51.350000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0ed0f0c0 a3=0 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.350000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.352000 audit: BPF prog-id=187 op=LOAD Jan 24 00:56:51.352000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd0ed0eee0 a2=94 a3=54428f items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.352000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.352000 audit: BPF prog-id=187 op=UNLOAD Jan 24 00:56:51.352000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd0ed0eee0 a2=94 a3=54428f items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.352000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.353000 audit: BPF prog-id=188 op=LOAD Jan 24 
00:56:51.353000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd0ed0ef10 a2=94 a3=2 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.353000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:56:51.353000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd0ed0ef10 a2=0 a3=2 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:51.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:51.402902 containerd[1614]: time="2026-01-24T00:56:51.397848258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cb6gm,Uid:28e628c9-c589-459d-8af8-92dc28cf7661,Namespace:kube-system,Attempt:0,} returns sandbox id \"799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a\"" Jan 24 00:56:51.424619 kubelet[2926]: E0124 00:56:51.423222 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:51.443204 containerd[1614]: time="2026-01-24T00:56:51.443107490Z" level=info msg="CreateContainer within sandbox \"799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:56:51.637917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841904854.mount: Deactivated successfully. 
Jan 24 00:56:51.812475 containerd[1614]: time="2026-01-24T00:56:51.810736862Z" level=info msg="Container d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:56:51.861978 containerd[1614]: time="2026-01-24T00:56:51.861803827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf8bbd55-mfbxj,Uid:e539ab3d-80aa-4b97-836f-149823e6c41d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebf6885d25f2e3d8831468bef45dc94e93b69232394c77af63fdc7efb84b7ec3\"" Jan 24 00:56:51.998977 containerd[1614]: time="2026-01-24T00:56:51.973243766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:56:52.080727 containerd[1614]: time="2026-01-24T00:56:52.028714007Z" level=info msg="CreateContainer within sandbox \"799ce83ccc5e5c8a39e3cf427ae0d25830761fcf6a76b7dcd7d0f8d2267c839a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806\"" Jan 24 00:56:52.091206 containerd[1614]: time="2026-01-24T00:56:52.091164322Z" level=info msg="StartContainer for \"d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806\"" Jan 24 00:56:52.159212 containerd[1614]: time="2026-01-24T00:56:52.159157358Z" level=info msg="connecting to shim d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806" address="unix:///run/containerd/s/f3689d46d63c7ed9bd104eba52df9f5603a39dc444de54181fd043d0d295ac38" protocol=ttrpc version=3 Jan 24 00:56:52.388104 containerd[1614]: time="2026-01-24T00:56:52.387957847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:56:52.393821 containerd[1614]: time="2026-01-24T00:56:52.393770489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:56:52.394076 containerd[1614]: time="2026-01-24T00:56:52.394048830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:52.398796 kubelet[2926]: E0124 00:56:52.398092 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:56:52.398889 kubelet[2926]: E0124 00:56:52.398857 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:56:52.399862 kubelet[2926]: E0124 00:56:52.399035 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf0f3ad691e64c2d81d0e0aa71a74bd8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:52.448027 containerd[1614]: time="2026-01-24T00:56:52.446068189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:56:52.664932 systemd[1]: Started 
cri-containerd-d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806.scope - libcontainer container d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806. Jan 24 00:56:52.778104 containerd[1614]: time="2026-01-24T00:56:52.777097820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:56:52.781949 systemd-networkd[1509]: cali11c4c7bef99: Link UP Jan 24 00:56:52.788836 systemd-networkd[1509]: cali11c4c7bef99: Gained carrier Jan 24 00:56:52.799036 containerd[1614]: time="2026-01-24T00:56:52.798850358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:56:52.808240 containerd[1614]: time="2026-01-24T00:56:52.799059452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:52.808668 kubelet[2926]: E0124 00:56:52.801728 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:56:52.808668 kubelet[2926]: E0124 00:56:52.801790 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:56:52.809168 kubelet[2926]: E0124 00:56:52.801934 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:52.809168 kubelet[2926]: E0124 00:56:52.803884 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.123 [INFO][5197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0 calico-kube-controllers-7797c85599- calico-system 1f065d32-b76b-4b45-b859-d08ade23f4a2 1038 0 2026-01-24 00:55:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7797c85599 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7797c85599-jzs5d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali11c4c7bef99 [] [] }} ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-" Jan 24 00:56:52.910825 containerd[1614]: 
2026-01-24 00:56:51.188 [INFO][5197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.623 [INFO][5298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" HandleID="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Workload="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.624 [INFO][5298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" HandleID="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Workload="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125d30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7797c85599-jzs5d", "timestamp":"2026-01-24 00:56:51.623614357 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.625 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.626 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.626 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:51.816 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.096 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.192 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.249 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.337 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.346 [INFO][5298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.368 [INFO][5298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.500 [INFO][5298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.711 [INFO][5298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.711 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" host="localhost" Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.712 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:52.910825 containerd[1614]: 2026-01-24 00:56:52.715 [INFO][5298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" HandleID="k8s-pod-network.dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Workload="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.930000 audit: BPF prog-id=189 op=LOAD Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.749 [INFO][5197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0", GenerateName:"calico-kube-controllers-7797c85599-", Namespace:"calico-system", SelfLink:"", UID:"1f065d32-b76b-4b45-b859-d08ade23f4a2", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7797c85599", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7797c85599-jzs5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11c4c7bef99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.750 [INFO][5197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.750 [INFO][5197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11c4c7bef99 ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.791 [INFO][5197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" 
Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.800 [INFO][5197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0", GenerateName:"calico-kube-controllers-7797c85599-", Namespace:"calico-system", SelfLink:"", UID:"1f065d32-b76b-4b45-b859-d08ade23f4a2", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7797c85599", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b", Pod:"calico-kube-controllers-7797c85599-jzs5d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11c4c7bef99", MAC:"f2:17:0c:94:bb:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Jan 24 00:56:52.939722 containerd[1614]: 2026-01-24 00:56:52.866 [INFO][5197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" Namespace="calico-system" Pod="calico-kube-controllers-7797c85599-jzs5d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797c85599--jzs5d-eth0" Jan 24 00:56:52.961000 audit: BPF prog-id=190 op=LOAD Jan 24 00:56:52.961000 audit[5319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.967000 audit: BPF prog-id=190 op=UNLOAD Jan 24 00:56:52.967000 audit[5319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.967000 audit: BPF prog-id=191 op=LOAD Jan 24 00:56:52.967000 audit[5319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.967000 audit: BPF prog-id=192 op=LOAD Jan 24 00:56:52.967000 audit[5319]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.967000 audit: BPF prog-id=192 op=UNLOAD Jan 24 00:56:52.967000 audit[5319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.967000 audit: BPF prog-id=191 op=UNLOAD Jan 24 00:56:52.967000 audit[5319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:52.972000 audit: BPF prog-id=193 op=LOAD Jan 24 00:56:52.972000 audit[5319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5168 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:52.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356532316335663735643233646164366530386533313334353530 Jan 24 00:56:53.659721 kubelet[2926]: E0124 00:56:53.637735 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:56:54.156869 systemd-networkd[1509]: cali11c4c7bef99: Gained IPv6LL Jan 24 00:56:54.203210 containerd[1614]: time="2026-01-24T00:56:54.200871199Z" level=info msg="StartContainer for \"d25e21c5f75d23dad6e08e313455010afe747f54ae782e03b6971b5f2239e806\" returns successfully" Jan 24 00:56:54.566897 containerd[1614]: time="2026-01-24T00:56:54.565650600Z" level=info msg="connecting to shim dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b" address="unix:///run/containerd/s/70d4a75db65705cd113d9e742501c96cb415a420547f9e15e2752fbad0a82ad4" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:54.687682 kubelet[2926]: E0124 00:56:54.687008 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.813826 kubelet[2926]: E0124 00:56:54.808897 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.831871 containerd[1614]: time="2026-01-24T00:56:54.815632880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:55.124000 audit[5456]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:55.170778 kernel: kauditd_printk_skb: 92 callbacks suppressed Jan 24 00:56:55.170966 kernel: audit: type=1325 audit(1769216215.124:617): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:55.124000 audit[5456]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd9bbe300 a2=0 a3=7ffdd9bbe2ec items=0 
ppid=3030 pid=5456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.337235 kernel: audit: type=1300 audit(1769216215.124:617): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd9bbe300 a2=0 a3=7ffdd9bbe2ec items=0 ppid=3030 pid=5456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:55.384782 kernel: audit: type=1327 audit(1769216215.124:617): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:55.226000 audit: BPF prog-id=194 op=LOAD Jan 24 00:56:55.417721 kernel: audit: type=1334 audit(1769216215.226:618): prog-id=194 op=LOAD Jan 24 00:56:55.505035 kernel: audit: type=1300 audit(1769216215.226:618): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd0ed0edd0 a2=94 a3=1 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.226000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd0ed0edd0 a2=94 a3=1 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.226000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.552209 kernel: audit: type=1327 audit(1769216215.226:618): 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.226000 audit: BPF prog-id=194 op=UNLOAD Jan 24 00:56:55.575674 kernel: audit: type=1334 audit(1769216215.226:619): prog-id=194 op=UNLOAD Jan 24 00:56:55.657572 kernel: audit: type=1300 audit(1769216215.226:619): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd0ed0edd0 a2=94 a3=1 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.226000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd0ed0edd0 a2=94 a3=1 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.705721 kernel: audit: type=1327 audit(1769216215.226:619): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.226000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.716000 kubelet[2926]: E0124 00:56:55.715906 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:55.294000 audit: BPF prog-id=195 op=LOAD Jan 24 00:56:55.743604 kernel: audit: type=1334 audit(1769216215.294:620): prog-id=195 op=LOAD Jan 24 00:56:55.294000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd0ed0edc0 a2=94 a3=4 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.294000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.294000 audit: BPF prog-id=195 op=UNLOAD Jan 
24 00:56:55.294000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd0ed0edc0 a2=0 a3=4 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.294000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.295000 audit: BPF prog-id=196 op=LOAD Jan 24 00:56:55.295000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0ed0ec20 a2=94 a3=5 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.295000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.295000 audit: BPF prog-id=196 op=UNLOAD Jan 24 00:56:55.295000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd0ed0ec20 a2=0 a3=5 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.295000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.295000 audit: BPF prog-id=197 op=LOAD Jan 24 00:56:55.295000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd0ed0ee40 a2=94 a3=6 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.295000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.296000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:56:55.296000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=5 a1=7ffd0ed0ee40 a2=0 a3=6 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.296000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.296000 audit: BPF prog-id=198 op=LOAD Jan 24 00:56:55.296000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd0ed0e5f0 a2=94 a3=88 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.296000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.298000 audit: BPF prog-id=199 op=LOAD Jan 24 00:56:55.298000 audit[5300]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd0ed0e470 a2=94 a3=2 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.298000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.298000 audit: BPF prog-id=199 op=UNLOAD Jan 24 00:56:55.298000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd0ed0e4a0 a2=0 a3=7ffd0ed0e5a0 items=0 ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.298000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.302000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:56:55.302000 audit[5300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2f5d4d10 a2=0 a3=3f5d99c0f4bc16e9 items=0 
ppid=5088 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.302000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:56:55.428000 audit[5456]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:55.428000 audit[5456]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd9bbe300 a2=0 a3=0 items=0 ppid=3030 pid=5456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:55.809000 audit: BPF prog-id=200 op=LOAD Jan 24 00:56:55.809000 audit[5474]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24f90a40 a2=98 a3=1999999999999999 items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:55.809000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.004000 audit: BPF prog-id=200 op=UNLOAD Jan 24 00:56:56.004000 audit[5474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe24f90a10 a3=0 items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.004000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.004000 audit: BPF prog-id=201 op=LOAD Jan 24 00:56:56.004000 audit[5474]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24f90920 a2=94 a3=ffff items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.004000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.004000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:56:56.004000 audit[5474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe24f90920 a2=94 a3=ffff items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.004000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.004000 audit: BPF prog-id=202 op=LOAD Jan 24 00:56:56.004000 audit[5474]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24f90960 a2=94 a3=7ffe24f90b40 items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.004000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.006000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:56:56.006000 audit[5474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe24f90960 a2=94 a3=7ffe24f90b40 items=0 ppid=5088 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.006000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:56:56.051065 systemd[1]: Started cri-containerd-dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b.scope - libcontainer container dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b. 
Jan 24 00:56:56.230000 audit: BPF prog-id=203 op=LOAD Jan 24 00:56:56.246000 audit: BPF prog-id=204 op=LOAD Jan 24 00:56:56.246000 audit[5455]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a2238 a2=98 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.248000 audit: BPF prog-id=204 op=UNLOAD Jan 24 00:56:56.248000 audit[5455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.251000 audit: BPF prog-id=205 op=LOAD Jan 24 00:56:56.251000 audit[5455]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a2488 a2=98 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.251000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.251000 audit: BPF prog-id=206 op=LOAD Jan 24 00:56:56.251000 audit[5455]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a2218 a2=98 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.251000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:56:56.251000 audit[5455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.251000 audit: BPF prog-id=205 op=UNLOAD Jan 24 00:56:56.251000 audit[5455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:56:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.251000 audit: BPF prog-id=207 op=LOAD Jan 24 00:56:56.251000 audit[5455]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a26e8 a2=98 a3=0 items=0 ppid=5432 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463303964393562636466323761336339616639306363323461663030 Jan 24 00:56:56.263467 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:56:56.308000 audit[5499]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:56.308000 audit[5499]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd65ed8e50 a2=0 a3=7ffd65ed8e3c items=0 ppid=3030 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:56.319000 audit[5499]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 
00:56:56.319000 audit[5499]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd65ed8e50 a2=0 a3=0 items=0 ppid=3030 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:56.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:56.626938 systemd-networkd[1509]: calic1d8e40542e: Link UP Jan 24 00:56:56.675169 systemd-networkd[1509]: calic1d8e40542e: Gained carrier Jan 24 00:56:56.740893 kubelet[2926]: E0124 00:56:56.738208 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:56.751544 kubelet[2926]: I0124 00:56:56.751122 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cb6gm" podStartSLOduration=158.751098241 podStartE2EDuration="2m38.751098241s" podCreationTimestamp="2026-01-24 00:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:54.924200178 +0000 UTC m=+156.503429911" watchObservedRunningTime="2026-01-24 00:56:56.751098241 +0000 UTC m=+158.330327974" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:54.031 [INFO][5353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--5tr2c-eth0 goldmane-666569f655- calico-system c89836fa-dd95-4cb6-925a-be9fc6a96ed3 1029 0 2026-01-24 00:55:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] 
map[] [] [] []} {k8s localhost goldmane-666569f655-5tr2c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic1d8e40542e [] [] }} ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:54.043 [INFO][5353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.390 [INFO][5423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" HandleID="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Workload="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.396 [INFO][5423] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" HandleID="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Workload="localhost-k8s-goldmane--666569f655--5tr2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005187e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-5tr2c", "timestamp":"2026-01-24 00:56:55.389835577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.409 [INFO][5423] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.409 [INFO][5423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.409 [INFO][5423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:55.745 [INFO][5423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.106 [INFO][5423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.200 [INFO][5423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.227 [INFO][5423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.276 [INFO][5423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.276 [INFO][5423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.301 [INFO][5423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301 Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.341 [INFO][5423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.392 
[INFO][5423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.392 [INFO][5423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" host="localhost" Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.392 [INFO][5423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:56.800805 containerd[1614]: 2026-01-24 00:56:56.392 [INFO][5423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" HandleID="k8s-pod-network.c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Workload="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.455 [INFO][5353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5tr2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c89836fa-dd95-4cb6-925a-be9fc6a96ed3", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-5tr2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1d8e40542e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.455 [INFO][5353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.455 [INFO][5353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d8e40542e ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.677 [INFO][5353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.682 [INFO][5353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5tr2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c89836fa-dd95-4cb6-925a-be9fc6a96ed3", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301", Pod:"goldmane-666569f655-5tr2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1d8e40542e", MAC:"76:dc:6b:ab:32:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:56.804874 containerd[1614]: 2026-01-24 00:56:56.761 [INFO][5353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" Namespace="calico-system" Pod="goldmane-666569f655-5tr2c" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5tr2c-eth0" Jan 24 00:56:57.173672 containerd[1614]: time="2026-01-24T00:56:57.171806768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797c85599-jzs5d,Uid:1f065d32-b76b-4b45-b859-d08ade23f4a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc09d95bcdf27a3c9af90cc24af001a17df4d9543103071154324269f9bd098b\"" Jan 24 00:56:57.190205 containerd[1614]: time="2026-01-24T00:56:57.190044598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:56:57.257000 audit[5535]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:57.257000 audit[5535]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff17593710 a2=0 a3=7fff175936fc items=0 ppid=3030 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:57.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:57.270868 containerd[1614]: time="2026-01-24T00:56:57.270671648Z" level=info msg="connecting to shim c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301" address="unix:///run/containerd/s/919f46ab1dae4112a21879a6d68d3ca9f17b6b6c8abcf2047dc265861fd7ffb6" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:57.278000 audit[5535]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=5535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:56:57.278000 audit[5535]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff17593710 a2=0 a3=7fff175936fc items=0 ppid=3030 pid=5535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:57.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:56:57.333808 containerd[1614]: time="2026-01-24T00:56:57.331778483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:56:57.357688 containerd[1614]: time="2026-01-24T00:56:57.356725060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:56:57.357853 containerd[1614]: time="2026-01-24T00:56:57.357767904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:57.359677 kubelet[2926]: E0124 00:56:57.359549 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:56:57.359677 kubelet[2926]: E0124 00:56:57.359621 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:56:57.360062 kubelet[2926]: E0124 00:56:57.359784 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5cmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:57.361956 kubelet[2926]: E0124 00:56:57.361783 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:57.439192 systemd-networkd[1509]: calie2565f038a6: Link UP Jan 24 00:56:57.450171 systemd-networkd[1509]: calie2565f038a6: Gained carrier Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:54.343 [INFO][5351] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--htrd2-eth0 csi-node-driver- calico-system 1351988d-2da1-448e-bfda-fb7490691684 859 0 2026-01-24 00:55:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-htrd2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2565f038a6 [] [] }} ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:54.343 [INFO][5351] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:55.548 [INFO][5436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" HandleID="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Workload="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:55.549 [INFO][5436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" HandleID="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Workload="localhost-k8s-csi--node--driver--htrd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee40), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-htrd2", "timestamp":"2026-01-24 00:56:55.548949961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:55.565 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.397 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.397 [INFO][5436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.687 [INFO][5436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.784 [INFO][5436] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.867 [INFO][5436] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.895 [INFO][5436] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.990 [INFO][5436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:56.994 [INFO][5436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" host="localhost" Jan 24 00:56:57.600719 
containerd[1614]: 2026-01-24 00:56:57.021 [INFO][5436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442 Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:57.199 [INFO][5436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:57.290 [INFO][5436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:57.290 [INFO][5436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" host="localhost" Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:57.290 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:56:57.600719 containerd[1614]: 2026-01-24 00:56:57.290 [INFO][5436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" HandleID="k8s-pod-network.431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Workload="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.411 [INFO][5351] cni-plugin/k8s.go 418: Populated endpoint ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htrd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1351988d-2da1-448e-bfda-fb7490691684", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-htrd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2565f038a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.412 [INFO][5351] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.412 [INFO][5351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2565f038a6 ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.452 [INFO][5351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.456 [INFO][5351] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htrd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1351988d-2da1-448e-bfda-fb7490691684", ResourceVersion:"859", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442", Pod:"csi-node-driver-htrd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2565f038a6", MAC:"3e:88:d0:89:86:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:57.609219 containerd[1614]: 2026-01-24 00:56:57.556 [INFO][5351] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" Namespace="calico-system" Pod="csi-node-driver-htrd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--htrd2-eth0" Jan 24 00:56:57.727102 systemd[1]: Started cri-containerd-c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301.scope - libcontainer container c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301. 
Jan 24 00:56:57.830130 kubelet[2926]: E0124 00:56:57.826963 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:57.904983 kubelet[2926]: E0124 00:56:57.904921 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:58.054897 systemd-networkd[1509]: cali2fa1e9a8334: Link UP Jan 24 00:56:58.061045 systemd-networkd[1509]: cali2fa1e9a8334: Gained carrier Jan 24 00:56:58.100939 containerd[1614]: time="2026-01-24T00:56:58.100880352Z" level=info msg="connecting to shim 431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442" address="unix:///run/containerd/s/41360f21e8c37ad00aad8c57cfece7fffe2c80e47ce9fc8557eca99cefa781dc" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:55.063 [INFO][5352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0 calico-apiserver-6b956dc89b- calico-apiserver bf242984-56a4-4914-9f0b-44fbe621897e 1039 0 2026-01-24 00:54:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b956dc89b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b956dc89b-8vwmm 
eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2fa1e9a8334 [] [] }} ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:55.128 [INFO][5352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:56.276 [INFO][5468] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" HandleID="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Workload="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:56.281 [INFO][5468] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" HandleID="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Workload="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b956dc89b-8vwmm", "timestamp":"2026-01-24 00:56:56.276236865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:56.281 [INFO][5468] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.320 [INFO][5468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.321 [INFO][5468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.406 [INFO][5468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.552 [INFO][5468] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.613 [INFO][5468] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.629 [INFO][5468] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.757 [INFO][5468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.757 [INFO][5468] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.789 [INFO][5468] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314 Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.844 [INFO][5468] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" host="localhost" Jan 24 00:56:58.151912 
containerd[1614]: 2026-01-24 00:56:57.988 [INFO][5468] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.988 [INFO][5468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" host="localhost" Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.988 [INFO][5468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:58.151912 containerd[1614]: 2026-01-24 00:56:57.988 [INFO][5468] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" HandleID="k8s-pod-network.e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Workload="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.020 [INFO][5352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0", GenerateName:"calico-apiserver-6b956dc89b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf242984-56a4-4914-9f0b-44fbe621897e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b956dc89b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b956dc89b-8vwmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa1e9a8334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.020 [INFO][5352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.020 [INFO][5352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fa1e9a8334 ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.073 [INFO][5352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.088 [INFO][5352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0", GenerateName:"calico-apiserver-6b956dc89b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf242984-56a4-4914-9f0b-44fbe621897e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b956dc89b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314", Pod:"calico-apiserver-6b956dc89b-8vwmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2fa1e9a8334", MAC:"aa:31:43:c0:27:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:58.158863 containerd[1614]: 2026-01-24 00:56:58.121 [INFO][5352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-8vwmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--8vwmm-eth0" Jan 24 00:56:58.212000 audit: BPF prog-id=208 op=LOAD Jan 24 00:56:58.218000 audit: BPF prog-id=209 op=LOAD Jan 24 00:56:58.218000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.218000 audit: BPF prog-id=209 op=UNLOAD Jan 24 00:56:58.218000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.229000 audit: BPF prog-id=210 op=LOAD Jan 24 00:56:58.229000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 
ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.229000 audit: BPF prog-id=211 op=LOAD Jan 24 00:56:58.229000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.229000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:56:58.229000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.229000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:56:58.229000 audit[5553]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.229000 audit: BPF prog-id=212 op=LOAD Jan 24 00:56:58.229000 audit[5553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5537 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:58.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332333166333237616464653135666434383038623262303030356164 Jan 24 00:56:58.257543 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:56:58.310968 systemd-networkd[1509]: calic1d8e40542e: Gained IPv6LL Jan 24 00:56:58.498108 systemd-networkd[1509]: vxlan.calico: Link UP Jan 24 00:56:58.498122 systemd-networkd[1509]: vxlan.calico: Gained carrier Jan 24 00:56:58.758753 systemd[1]: Started cri-containerd-431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442.scope - libcontainer container 431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442. 
Jan 24 00:56:58.934191 kubelet[2926]: E0124 00:56:58.931158 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:56:58.981508 containerd[1614]: time="2026-01-24T00:56:58.973014986Z" level=info msg="connecting to shim e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314" address="unix:///run/containerd/s/2604b87c59515b6c58ad99bac42ae8e4a6dc09f552c30c8eefd17d4c34c7d74f" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:58.977497 systemd-networkd[1509]: calie2565f038a6: Gained IPv6LL Jan 24 00:56:59.003000 audit: BPF prog-id=213 op=LOAD Jan 24 00:56:59.003000 audit: BPF prog-id=214 op=LOAD Jan 24 00:56:59.003000 audit[5610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.005000 audit: BPF prog-id=214 op=UNLOAD Jan 24 00:56:59.005000 audit[5610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.006000 audit: BPF prog-id=215 op=LOAD Jan 24 00:56:59.006000 audit[5610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.010000 audit: BPF prog-id=216 op=LOAD Jan 24 00:56:59.010000 audit[5610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.010000 audit: BPF prog-id=216 op=UNLOAD Jan 24 00:56:59.010000 audit[5610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5593 pid=5610 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.012000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:56:59.012000 audit[5610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.012000 audit: BPF prog-id=217 op=LOAD Jan 24 00:56:59.012000 audit[5610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5593 pid=5610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433316439386463376436356365626132323062366539313436363362 Jan 24 00:56:59.054661 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 
00:56:59.124107 systemd-networkd[1509]: cali96509120581: Link UP Jan 24 00:56:59.125101 systemd-networkd[1509]: cali96509120581: Gained carrier Jan 24 00:56:59.221863 containerd[1614]: time="2026-01-24T00:56:59.207754628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5tr2c,Uid:c89836fa-dd95-4cb6-925a-be9fc6a96ed3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c231f327adde15fd4808b2b0005ad9aa7e6672edd6759e5fe7b044d96f7e8301\"" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:55.423 [INFO][5358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0 calico-apiserver-6b956dc89b- calico-apiserver 12fcbf43-6c31-4160-9172-b8eee7f25a4a 1033 0 2026-01-24 00:54:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b956dc89b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b956dc89b-tb4xl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96509120581 [] [] }} ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:55.424 [INFO][5358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:56.659 [INFO][5485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" HandleID="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Workload="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:56.670 [INFO][5485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" HandleID="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Workload="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003935d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b956dc89b-tb4xl", "timestamp":"2026-01-24 00:56:56.659791733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:56.670 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:57.989 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:57.990 [INFO][5485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.093 [INFO][5485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.158 [INFO][5485] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.246 [INFO][5485] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.284 [INFO][5485] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.394 [INFO][5485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.400 [INFO][5485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.488 [INFO][5485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.564 [INFO][5485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.702 [INFO][5485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.703 [INFO][5485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" host="localhost" Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.704 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:59.248114 containerd[1614]: 2026-01-24 00:56:58.704 [INFO][5485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" HandleID="k8s-pod-network.2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Workload="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:58.912 [INFO][5358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0", GenerateName:"calico-apiserver-6b956dc89b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12fcbf43-6c31-4160-9172-b8eee7f25a4a", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b956dc89b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b956dc89b-tb4xl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96509120581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:58.915 [INFO][5358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:58.915 [INFO][5358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96509120581 ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:59.109 [INFO][5358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:59.125 [INFO][5358] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0", GenerateName:"calico-apiserver-6b956dc89b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12fcbf43-6c31-4160-9172-b8eee7f25a4a", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b956dc89b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc", Pod:"calico-apiserver-6b956dc89b-tb4xl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96509120581", MAC:"82:29:06:d5:07:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.250823 containerd[1614]: 2026-01-24 00:56:59.199 [INFO][5358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" Namespace="calico-apiserver" Pod="calico-apiserver-6b956dc89b-tb4xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b956dc89b--tb4xl-eth0" Jan 24 00:56:59.257000 audit: BPF prog-id=218 op=LOAD Jan 24 00:56:59.257000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff49493760 a2=98 a3=0 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.257000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.258000 audit: BPF prog-id=218 op=UNLOAD Jan 24 00:56:59.258000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff49493730 a3=0 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.258000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.259000 audit: BPF prog-id=219 op=LOAD Jan 24 00:56:59.259000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff49493570 a2=94 a3=54428f items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.259000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=219 op=UNLOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff49493570 a2=94 a3=54428f items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=220 op=LOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff494935a0 a2=94 a3=2 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=220 op=UNLOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff494935a0 a2=0 a3=2 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=221 op=LOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff49493350 a2=94 a3=4 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff49493350 a2=94 a3=4 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=222 op=LOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff49493450 a2=94 a3=7fff494935d0 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.261000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:56:59.261000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff49493450 a2=0 a3=7fff494935d0 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.261000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.274000 audit: BPF prog-id=223 op=LOAD Jan 24 00:56:59.274000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff49492b80 a2=94 a3=2 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.274000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.287000 audit: BPF prog-id=223 op=UNLOAD Jan 24 00:56:59.287000 audit[5671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff49492b80 a2=0 a3=2 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.287000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.287000 audit: BPF prog-id=224 op=LOAD Jan 24 00:56:59.287000 audit[5671]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff49492c80 a2=94 a3=30 items=0 ppid=5088 pid=5671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.287000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:56:59.293195 containerd[1614]: time="2026-01-24T00:56:59.266059784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:56:59.590138 containerd[1614]: time="2026-01-24T00:56:59.589074455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:56:59.620159 containerd[1614]: time="2026-01-24T00:56:59.619056598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:56:59.629560 kubelet[2926]: E0124 00:56:59.628851 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:56:59.630839 kubelet[2926]: E0124 00:56:59.630145 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:56:59.632200 containerd[1614]: time="2026-01-24T00:56:59.626239850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:59.634112 kubelet[2926]: E0124 00:56:59.633816 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{P
robeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:59.635836 kubelet[2926]: E0124 00:56:59.635800 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:59.669764 containerd[1614]: 
time="2026-01-24T00:56:59.666100374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htrd2,Uid:1351988d-2da1-448e-bfda-fb7490691684,Namespace:calico-system,Attempt:0,} returns sandbox id \"431d98dc7d65ceba220b6e914663be705f5bf85085767b9018f182606d8a0442\"" Jan 24 00:56:59.683186 containerd[1614]: time="2026-01-24T00:56:59.683142281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:56:59.704000 audit: BPF prog-id=225 op=LOAD Jan 24 00:56:59.704000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee6e61930 a2=98 a3=0 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.704000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:56:59.704000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee6e61900 a3=0 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.705000 audit: BPF prog-id=226 op=LOAD Jan 24 00:56:59.705000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee6e61720 a2=94 a3=54428f items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.705000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:56:59.705000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee6e61720 a2=94 a3=54428f items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.705000 audit: BPF prog-id=227 op=LOAD Jan 24 00:56:59.705000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee6e61750 a2=94 a3=2 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.706000 audit: BPF prog-id=227 op=UNLOAD Jan 24 00:56:59.706000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee6e61750 a2=0 a3=2 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:56:59.706000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:56:59.800810 systemd[1]: Started cri-containerd-e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314.scope - libcontainer container e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314. Jan 24 00:56:59.805678 kubelet[2926]: E0124 00:56:59.805644 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:59.842200 containerd[1614]: time="2026-01-24T00:56:59.841963114Z" level=info msg="connecting to shim 2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc" address="unix:///run/containerd/s/713c88a75989c12ea2960b64a63819d488db64d0c9eb7c71b7213d038fe6de2d" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:56:59.863096 containerd[1614]: time="2026-01-24T00:56:59.863046323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:56:59.883059 containerd[1614]: time="2026-01-24T00:56:59.882997543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:56:59.884529 containerd[1614]: time="2026-01-24T00:56:59.883687220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:56:59.887879 kubelet[2926]: E0124 00:56:59.887679 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 
00:56:59.887879 kubelet[2926]: E0124 00:56:59.887853 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:56:59.888777 kubelet[2926]: E0124 00:56:59.887999 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfi
le{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:59.900902 containerd[1614]: time="2026-01-24T00:56:59.900855598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:56:59.969488 kubelet[2926]: E0124 00:56:59.969018 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:56:59.977670 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL Jan 24 00:57:00.068184 containerd[1614]: time="2026-01-24T00:57:00.066756770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:00.082201 containerd[1614]: time="2026-01-24T00:57:00.080632259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:00.082673 containerd[1614]: 
time="2026-01-24T00:57:00.082644055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:00.100000 audit: BPF prog-id=228 op=LOAD Jan 24 00:57:00.103873 kubelet[2926]: E0124 00:57:00.101704 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:00.105882 systemd-networkd[1509]: cali2fa1e9a8334: Gained IPv6LL Jan 24 00:57:00.115612 kubelet[2926]: E0124 00:57:00.115559 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:00.115955 kubelet[2926]: E0124 00:57:00.115892 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:00.127575 kubelet[2926]: E0124 00:57:00.125806 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:57:00.189837 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 24 00:57:00.189982 kernel: audit: type=1334 audit(1769216220.143:685): prog-id=229 op=LOAD Jan 24 00:57:00.143000 audit: BPF prog-id=229 op=LOAD Jan 24 00:57:00.200539 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:00.217587 kernel: audit: type=1300 audit(1769216220.143:685): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.217693 kernel: audit: type=1327 audit(1769216220.143:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.217727 kernel: audit: type=1334 audit(1769216220.143:686): 
prog-id=229 op=UNLOAD Jan 24 00:57:00.217764 kernel: audit: type=1300 audit(1769216220.143:686): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.217792 kernel: audit: type=1327 audit(1769216220.143:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.217813 kernel: audit: type=1334 audit(1769216220.154:687): prog-id=230 op=LOAD Jan 24 00:57:00.217838 kernel: audit: type=1300 audit(1769216220.154:687): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.217863 kernel: audit: type=1327 audit(1769216220.154:687): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.217884 kernel: audit: type=1334 audit(1769216220.154:688): prog-id=231 op=LOAD Jan 24 00:57:00.143000 audit[5687]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.143000 audit: BPF prog-id=229 op=UNLOAD Jan 24 00:57:00.143000 audit[5687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.154000 audit: BPF prog-id=230 op=LOAD Jan 24 00:57:00.154000 audit[5687]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.154000 audit: BPF prog-id=231 op=LOAD Jan 24 00:57:00.154000 audit[5687]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:57:00.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.154000 audit: BPF prog-id=231 op=UNLOAD Jan 24 00:57:00.154000 audit[5687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.154000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:57:00.154000 audit[5687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.154000 audit: BPF prog-id=232 op=LOAD Jan 24 00:57:00.154000 audit[5687]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=5644 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534396164646230386231383130336534336263353065373633343634 Jan 24 00:57:00.257000 audit[5755]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:00.257000 audit[5755]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd32f77d00 a2=0 a3=7ffd32f77cec items=0 ppid=3030 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:00.294994 systemd-networkd[1509]: cali96509120581: Gained IPv6LL Jan 24 00:57:00.375000 audit[5755]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:00.375000 audit[5755]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd32f77d00 a2=0 a3=7ffd32f77cec items=0 ppid=3030 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:00.830817 systemd[1]: Started cri-containerd-2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc.scope - libcontainer container 
2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc. Jan 24 00:57:00.896033 containerd[1614]: time="2026-01-24T00:57:00.895970217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-8vwmm,Uid:bf242984-56a4-4914-9f0b-44fbe621897e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e49addb08b18103e43bc50e7634643360e3e0d252b32440b3476c90851631314\"" Jan 24 00:57:00.907695 containerd[1614]: time="2026-01-24T00:57:00.905572977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:00.964000 audit: BPF prog-id=233 op=LOAD Jan 24 00:57:00.974000 audit: BPF prog-id=234 op=LOAD Jan 24 00:57:00.974000 audit[5744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.975000 audit: BPF prog-id=234 op=UNLOAD Jan 24 00:57:00.975000 audit[5744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.976000 audit: 
BPF prog-id=235 op=LOAD Jan 24 00:57:00.983000 audit: BPF prog-id=236 op=LOAD Jan 24 00:57:00.983000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee6e61610 a2=94 a3=1 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.983000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:00.983000 audit: BPF prog-id=236 op=UNLOAD Jan 24 00:57:00.983000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee6e61610 a2=94 a3=1 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.983000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:00.976000 audit[5744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.987000 audit: BPF prog-id=237 op=LOAD Jan 24 00:57:00.987000 audit[5744]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.987000 audit: BPF prog-id=237 op=UNLOAD Jan 24 00:57:00.987000 audit[5744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.987000 audit: BPF prog-id=235 op=UNLOAD Jan 24 00:57:00.987000 audit[5744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.987000 audit: BPF prog-id=238 op=LOAD Jan 24 00:57:00.987000 
audit[5744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5725 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:00.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303736323166393630376263643732373631633839393330313430 Jan 24 00:57:00.997460 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:01.005000 audit: BPF prog-id=239 op=LOAD Jan 24 00:57:01.005000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee6e61600 a2=94 a3=4 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.005000 audit: BPF prog-id=239 op=UNLOAD Jan 24 00:57:01.005000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee6e61600 a2=0 a3=4 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.008000 
audit: BPF prog-id=240 op=LOAD Jan 24 00:57:01.008000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee6e61460 a2=94 a3=5 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.008000 audit: BPF prog-id=240 op=UNLOAD Jan 24 00:57:01.008000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee6e61460 a2=0 a3=5 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=241 op=LOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee6e61680 a2=94 a3=6 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=241 op=UNLOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee6e61680 a2=0 a3=6 items=0 
ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=242 op=LOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee6e60e30 a2=94 a3=88 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=243 op=LOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffee6e60cb0 a2=94 a3=2 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=243 op=UNLOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffee6e60ce0 a2=0 a3=7ffee6e60de0 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.009000 audit: BPF prog-id=242 op=UNLOAD Jan 24 00:57:01.009000 audit[5711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=32c02d10 a2=0 a3=6cb651746ff2c762 items=0 ppid=5088 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:57:01.070000 audit: BPF prog-id=224 op=UNLOAD Jan 24 00:57:01.070000 audit[5088]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001016440 a2=0 a3=0 items=0 ppid=5050 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:01.070000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:57:01.107465 kubelet[2926]: E0124 00:57:01.104949 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 
00:57:01.117022 kubelet[2926]: E0124 00:57:01.116821 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:57:01.153983 containerd[1614]: time="2026-01-24T00:57:01.141922938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:01.291841 containerd[1614]: time="2026-01-24T00:57:01.276984104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:01.291841 containerd[1614]: time="2026-01-24T00:57:01.291158320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:01.398533 kubelet[2926]: E0124 00:57:01.292657 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:01.398533 kubelet[2926]: E0124 00:57:01.292797 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:01.432606 kubelet[2926]: E0124 00:57:01.423137 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:01.467698 kubelet[2926]: E0124 00:57:01.465666 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:57:02.127705 kubelet[2926]: E0124 00:57:02.123935 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:57:02.287696 containerd[1614]: time="2026-01-24T00:57:02.280600420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b956dc89b-tb4xl,Uid:12fcbf43-6c31-4160-9172-b8eee7f25a4a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c07621f9607bcd72761c89930140fd66410f7da168f7374e25ffd6a27f13acc\"" Jan 24 00:57:02.287696 containerd[1614]: time="2026-01-24T00:57:02.285814355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:02.337034 systemd-networkd[1509]: cali2aaf81fbdd7: Link UP Jan 24 00:57:02.346710 systemd-networkd[1509]: cali2aaf81fbdd7: Gained carrier Jan 24 00:57:02.409051 containerd[1614]: time="2026-01-24T00:57:02.400639980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:02.423530 containerd[1614]: time="2026-01-24T00:57:02.420826114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:02.423530 containerd[1614]: time="2026-01-24T00:57:02.421046039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:02.423736 kubelet[2926]: E0124 00:57:02.421665 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:02.423736 
kubelet[2926]: E0124 00:57:02.421959 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:02.423736 kubelet[2926]: E0124 00:57:02.422543 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:02.427585 kubelet[2926]: E0124 00:57:02.425737 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.357 [INFO][5680] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0 coredns-668d6bf9bc- kube-system 3894e361-72d8-4a9b-bd6a-f0764e209428 1037 0 
2026-01-24 00:54:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5fkkt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2aaf81fbdd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.358 [INFO][5680] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.955 [INFO][5770] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" HandleID="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Workload="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.962 [INFO][5770] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" HandleID="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Workload="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ca040), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5fkkt", "timestamp":"2026-01-24 00:57:00.955768917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.962 [INFO][5770] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.962 [INFO][5770] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:00.963 [INFO][5770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.081 [INFO][5770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.288 [INFO][5770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.442 [INFO][5770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.515 [INFO][5770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.560 [INFO][5770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.560 [INFO][5770] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.828 [INFO][5770] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57 Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:01.983 [INFO][5770] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:02.162 [INFO][5770] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:02.173 [INFO][5770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" host="localhost" Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:02.180 [INFO][5770] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:02.523122 containerd[1614]: 2026-01-24 00:57:02.187 [INFO][5770] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" HandleID="k8s-pod-network.bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Workload="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.286 [INFO][5680] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3894e361-72d8-4a9b-bd6a-f0764e209428", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 18, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5fkkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aaf81fbdd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.291 [INFO][5680] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.293 [INFO][5680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2aaf81fbdd7 ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.355 [INFO][5680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.384 [INFO][5680] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3894e361-72d8-4a9b-bd6a-f0764e209428", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57", Pod:"coredns-668d6bf9bc-5fkkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali2aaf81fbdd7", MAC:"a6:aa:f6:11:35:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:02.527174 containerd[1614]: 2026-01-24 00:57:02.454 [INFO][5680] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" Namespace="kube-system" Pod="coredns-668d6bf9bc-5fkkt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5fkkt-eth0" Jan 24 00:57:02.620000 audit[5811]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:02.620000 audit[5811]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc14f0ab30 a2=0 a3=7ffc14f0ab1c items=0 ppid=3030 pid=5811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:02.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:02.660000 audit[5811]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:02.660000 audit[5811]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc14f0ab30 a2=0 a3=7ffc14f0ab1c items=0 ppid=3030 pid=5811 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:02.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:02.916496 containerd[1614]: time="2026-01-24T00:57:02.913927530Z" level=info msg="connecting to shim bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57" address="unix:///run/containerd/s/168550ba0b2aa55b5b5b7d9e68da90878b02a14d9baae301090ec1dddcdcc2f7" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:57:03.193835 kubelet[2926]: E0124 00:57:03.193669 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:57:03.285849 systemd[1]: Started cri-containerd-bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57.scope - libcontainer container bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57. 
Jan 24 00:57:03.491000 audit: BPF prog-id=244 op=LOAD Jan 24 00:57:03.497000 audit: BPF prog-id=245 op=LOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=245 op=UNLOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=246 op=LOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=247 op=LOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=247 op=UNLOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=246 op=UNLOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.497000 audit: BPF prog-id=248 op=LOAD Jan 24 00:57:03.497000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5826 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264366532323264306535326333363031623134323265393934333336 Jan 24 00:57:03.523702 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:03.805000 audit[5863]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:03.805000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6df28240 a2=0 a3=7ffc6df2822c items=0 ppid=3030 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:03.825050 systemd-networkd[1509]: cali2aaf81fbdd7: Gained IPv6LL Jan 24 00:57:03.857000 audit[5863]: NETFILTER_CFG table=nat:136 family=2 entries=20 
op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:03.857000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc6df28240 a2=0 a3=7ffc6df2822c items=0 ppid=3030 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:03.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:03.895964 kubelet[2926]: E0124 00:57:03.862652 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:04.184868 kubelet[2926]: E0124 00:57:04.184821 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:57:04.177000 audit[5868]: NETFILTER_CFG table=mangle:137 family=2 entries=16 op=nft_register_chain pid=5868 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:04.177000 audit[5868]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffddafb8560 a2=0 a3=7ffddafb854c items=0 ppid=5088 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.196000 audit[5870]: 
NETFILTER_CFG table=nat:138 family=2 entries=15 op=nft_register_chain pid=5870 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:04.196000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffed3243c20 a2=0 a3=55805b94d000 items=0 ppid=5088 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.196000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:04.344058 containerd[1614]: time="2026-01-24T00:57:04.344012324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5fkkt,Uid:3894e361-72d8-4a9b-bd6a-f0764e209428,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57\"" Jan 24 00:57:04.352526 kubelet[2926]: E0124 00:57:04.352116 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:04.177000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:04.430572 containerd[1614]: time="2026-01-24T00:57:04.429910925Z" level=info msg="CreateContainer within sandbox \"bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:04.472000 audit[5869]: NETFILTER_CFG table=raw:139 family=2 entries=21 op=nft_register_chain pid=5869 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:04.472000 audit[5869]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffed491a920 a2=0 
a3=7ffed491a90c items=0 ppid=5088 pid=5869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.472000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:04.603636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239231824.mount: Deactivated successfully. Jan 24 00:57:04.609948 containerd[1614]: time="2026-01-24T00:57:04.609898529Z" level=info msg="Container 3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:57:04.683432 containerd[1614]: time="2026-01-24T00:57:04.683054904Z" level=info msg="CreateContainer within sandbox \"bd6e222d0e52c3601b1422e994336f00db3464a703c33e41dad3f88c1a8a5e57\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191\"" Jan 24 00:57:04.554000 audit[5877]: NETFILTER_CFG table=filter:140 family=2 entries=262 op=nft_register_chain pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:04.554000 audit[5877]: SYSCALL arch=c000003e syscall=46 success=yes exit=153708 a0=3 a1=7fffc9677460 a2=0 a3=7fffc967744c items=0 ppid=5088 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:04.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:04.695767 containerd[1614]: time="2026-01-24T00:57:04.692876387Z" level=info msg="StartContainer for 
\"3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191\"" Jan 24 00:57:04.730824 containerd[1614]: time="2026-01-24T00:57:04.729614545Z" level=info msg="connecting to shim 3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191" address="unix:///run/containerd/s/168550ba0b2aa55b5b5b7d9e68da90878b02a14d9baae301090ec1dddcdcc2f7" protocol=ttrpc version=3 Jan 24 00:57:05.050000 audit[5893]: NETFILTER_CFG table=filter:141 family=2 entries=87 op=nft_register_chain pid=5893 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:57:05.050000 audit[5893]: SYSCALL arch=c000003e syscall=46 success=yes exit=45748 a0=3 a1=7ffded2b22e0 a2=0 a3=7ffded2b22cc items=0 ppid=5088 pid=5893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.050000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:57:05.173776 systemd[1]: Started cri-containerd-3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191.scope - libcontainer container 3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191. 
Jan 24 00:57:05.387235 kernel: kauditd_printk_skb: 127 callbacks suppressed Jan 24 00:57:05.394515 kernel: audit: type=1334 audit(1769216225.330:732): prog-id=249 op=LOAD Jan 24 00:57:05.330000 audit: BPF prog-id=249 op=LOAD Jan 24 00:57:05.387000 audit: BPF prog-id=250 op=LOAD Jan 24 00:57:05.387000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.523730 kernel: audit: type=1334 audit(1769216225.387:733): prog-id=250 op=LOAD Jan 24 00:57:05.523888 kernel: audit: type=1300 audit(1769216225.387:733): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.608787 kernel: audit: type=1327 audit(1769216225.387:733): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.608903 kernel: audit: type=1334 audit(1769216225.463:734): prog-id=250 op=UNLOAD Jan 24 00:57:05.463000 audit: BPF prog-id=250 op=UNLOAD Jan 24 00:57:05.463000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5884 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.760029 kernel: audit: type=1300 audit(1769216225.463:734): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.760528 kernel: audit: type=1327 audit(1769216225.463:734): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.760591 kernel: audit: type=1334 audit(1769216225.469:735): prog-id=251 op=LOAD Jan 24 00:57:05.469000 audit: BPF prog-id=251 op=LOAD Jan 24 00:57:05.469000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.823582 kernel: audit: type=1300 audit(1769216225.469:735): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:57:05.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.868555 containerd[1614]: time="2026-01-24T00:57:05.868508890Z" level=info msg="StartContainer for \"3cc659e0fc403d6ad4a4613e7a6cfecf6fb7f2520414ef5d09928a71804f8191\" returns successfully" Jan 24 00:57:05.894776 kernel: audit: type=1327 audit(1769216225.469:735): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.469000 audit: BPF prog-id=252 op=LOAD Jan 24 00:57:05.469000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.469000 audit: BPF prog-id=252 op=UNLOAD Jan 24 00:57:05.469000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.469000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.469000 audit: BPF prog-id=251 op=UNLOAD Jan 24 00:57:05.469000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:05.469000 audit: BPF prog-id=253 op=LOAD Jan 24 00:57:05.469000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5826 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:05.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633635396530666334303364366164346134363133653761366366 Jan 24 00:57:06.232971 kubelet[2926]: E0124 00:57:06.232178 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:06.275710 kubelet[2926]: I0124 00:57:06.274959 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-5fkkt" podStartSLOduration=168.27493335 podStartE2EDuration="2m48.27493335s" podCreationTimestamp="2026-01-24 00:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:06.272992659 +0000 UTC m=+167.852222392" watchObservedRunningTime="2026-01-24 00:57:06.27493335 +0000 UTC m=+167.854163083" Jan 24 00:57:06.362000 audit[5927]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:06.362000 audit[5927]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb05795b0 a2=0 a3=7ffeb057959c items=0 ppid=3030 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:06.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:06.375000 audit[5927]: NETFILTER_CFG table=nat:143 family=2 entries=44 op=nft_register_rule pid=5927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:06.375000 audit[5927]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb05795b0 a2=0 a3=7ffeb057959c items=0 ppid=3030 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:06.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:06.419000 audit[5929]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 
00:57:06.419000 audit[5929]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd07412e10 a2=0 a3=7ffd07412dfc items=0 ppid=3030 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:06.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:06.456000 audit[5929]: NETFILTER_CFG table=nat:145 family=2 entries=56 op=nft_register_chain pid=5929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:57:06.456000 audit[5929]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd07412e10 a2=0 a3=7ffd07412dfc items=0 ppid=3030 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:06.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:57:07.231971 kubelet[2926]: E0124 00:57:07.231925 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:07.812357 containerd[1614]: time="2026-01-24T00:57:07.811569336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:07.895522 containerd[1614]: time="2026-01-24T00:57:07.894523973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:07.904183 containerd[1614]: time="2026-01-24T00:57:07.900705762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:07.904183 containerd[1614]: time="2026-01-24T00:57:07.900793954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:07.904550 kubelet[2926]: E0124 00:57:07.901514 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:07.904550 kubelet[2926]: E0124 00:57:07.901573 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:07.904550 kubelet[2926]: E0124 00:57:07.901708 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf0f3ad691e64c2d81d0e0aa71a74bd8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:07.908963 containerd[1614]: time="2026-01-24T00:57:07.907442904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:07.990379 containerd[1614]: 
time="2026-01-24T00:57:07.990220889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:07.992934 containerd[1614]: time="2026-01-24T00:57:07.992513680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:07.992934 containerd[1614]: time="2026-01-24T00:57:07.992657374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:07.993823 kubelet[2926]: E0124 00:57:07.993692 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:07.993823 kubelet[2926]: E0124 00:57:07.993787 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:07.994181 kubelet[2926]: E0124 00:57:07.993947 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:07.995457 kubelet[2926]: E0124 00:57:07.995228 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:57:10.812686 containerd[1614]: time="2026-01-24T00:57:10.810873449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:10.885401 containerd[1614]: time="2026-01-24T00:57:10.884131301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:10.903415 containerd[1614]: time="2026-01-24T00:57:10.901238514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:10.903415 containerd[1614]: time="2026-01-24T00:57:10.901523619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:10.903663 kubelet[2926]: E0124 00:57:10.901690 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:10.903663 kubelet[2926]: E0124 00:57:10.901745 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:10.903663 kubelet[2926]: E0124 00:57:10.901886 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5cmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:10.904782 kubelet[2926]: E0124 00:57:10.904723 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" 
podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:57:12.821106 containerd[1614]: time="2026-01-24T00:57:12.819472800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:12.936892 containerd[1614]: time="2026-01-24T00:57:12.936750909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:12.940304 containerd[1614]: time="2026-01-24T00:57:12.939421471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:12.940304 containerd[1614]: time="2026-01-24T00:57:12.939655393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:12.940460 kubelet[2926]: E0124 00:57:12.939914 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:12.940460 kubelet[2926]: E0124 00:57:12.940041 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:12.940460 kubelet[2926]: E0124 00:57:12.940160 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:57:12.945522 containerd[1614]: time="2026-01-24T00:57:12.945430853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:13.022618 containerd[1614]: time="2026-01-24T00:57:13.022494181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:13.025403 containerd[1614]: time="2026-01-24T00:57:13.025163442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:13.025403 containerd[1614]: time="2026-01-24T00:57:13.025361767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:13.025703 kubelet[2926]: E0124 00:57:13.025588 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:13.025703 kubelet[2926]: E0124 00:57:13.025647 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:13.026113 kubelet[2926]: E0124 00:57:13.025786 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:13.027795 kubelet[2926]: E0124 00:57:13.027689 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:57:13.817589 kubelet[2926]: E0124 00:57:13.816409 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:14.913694 kubelet[2926]: E0124 00:57:14.913464 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:15.814346 containerd[1614]: time="2026-01-24T00:57:15.813562293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:15.912545 containerd[1614]: time="2026-01-24T00:57:15.910458873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:15.917975 containerd[1614]: time="2026-01-24T00:57:15.917482909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 
00:57:15.917975 containerd[1614]: time="2026-01-24T00:57:15.917695060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:15.921240 kubelet[2926]: E0124 00:57:15.918744 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:15.921240 kubelet[2926]: E0124 00:57:15.919227 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:15.921240 kubelet[2926]: E0124 00:57:15.919555 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:15.923757 kubelet[2926]: E0124 00:57:15.922614 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3"
Jan 24 00:57:16.809032 kubelet[2926]: E0124 00:57:16.807395 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:57:16.812434 containerd[1614]: time="2026-01-24T00:57:16.812393659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:57:16.898615 containerd[1614]: time="2026-01-24T00:57:16.898383932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:57:16.907477 containerd[1614]: time="2026-01-24T00:57:16.905582455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:57:16.907477 containerd[1614]: time="2026-01-24T00:57:16.906021060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:57:16.907720 kubelet[2926]: E0124 00:57:16.907161 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:16.907720 kubelet[2926]: E0124 00:57:16.907214 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:16.907720 kubelet[2926]: E0124 00:57:16.907513 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:16.910033 kubelet[2926]: E0124 00:57:16.909427 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e"
Jan 24 00:57:17.810129 containerd[1614]: time="2026-01-24T00:57:17.809642144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:57:17.886491 containerd[1614]: time="2026-01-24T00:57:17.884396523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:57:17.894735 containerd[1614]: time="2026-01-24T00:57:17.892984741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:57:17.894735 containerd[1614]: time="2026-01-24T00:57:17.893957311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:57:17.906779 kubelet[2926]: E0124 00:57:17.896769 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:17.906779 kubelet[2926]: E0124 00:57:17.902738 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:57:17.910824 kubelet[2926]: E0124 00:57:17.908441 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:17.910824 kubelet[2926]: E0124 00:57:17.910190 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a"
Jan 24 00:57:21.820410 kubelet[2926]: E0124 00:57:21.820143 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d"
Jan 24 00:57:24.821119 kubelet[2926]: E0124 00:57:24.817155 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684"
Jan 24 00:57:25.839694 kubelet[2926]: E0124 00:57:25.839030 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2"
Jan 24 00:57:28.058446 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jan 24 00:57:28.058672 kernel: audit: type=1130 audit(1769216248.023:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:39392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:28.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:39392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:28.024121 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:39392.service - OpenSSH per-connection server daemon (10.0.0.1:39392).
Jan 24 00:57:28.668000 audit[5992]: USER_ACCT pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.671633 sshd[5992]: Accepted publickey for core from 10.0.0.1 port 39392 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:57:28.677894 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:57:28.714188 kernel: audit: type=1101 audit(1769216248.668:745): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.671000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.736014 systemd-logind[1597]: New session 11 of user core.
Jan 24 00:57:28.791073 kernel: audit: type=1103 audit(1769216248.671:746): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.791228 kernel: audit: type=1006 audit(1769216248.672:747): pid=5992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Jan 24 00:57:28.816794 kubelet[2926]: E0124 00:57:28.815681 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a"
Jan 24 00:57:28.827445 kernel: audit: type=1300 audit(1769216248.672:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc270f5840 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:57:28.672000 audit[5992]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc270f5840 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:57:28.672000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:57:28.904431 kernel: audit: type=1327 audit(1769216248.672:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:57:28.914243 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 00:57:28.928000 audit[5992]: USER_START pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.983569 kernel: audit: type=1105 audit(1769216248.928:748): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:28.940000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.052374 kernel: audit: type=1103 audit(1769216248.940:749): pid=5996 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.609475 sshd[5996]: Connection closed by 10.0.0.1 port 39392
Jan 24 00:57:29.609148 sshd-session[5992]: pam_unix(sshd:session): session closed for user core
Jan 24 00:57:29.610000 audit[5992]: USER_END pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.628189 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:39392.service: Deactivated successfully.
Jan 24 00:57:29.651121 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 00:57:29.672431 kernel: audit: type=1106 audit(1769216249.610:750): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.671824 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit.
Jan 24 00:57:29.674590 systemd-logind[1597]: Removed session 11.
Jan 24 00:57:29.616000 audit[5992]: CRED_DISP pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:39392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:29.707863 kernel: audit: type=1104 audit(1769216249.616:751): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:29.874012 kubelet[2926]: E0124 00:57:29.871605 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3"
Jan 24 00:57:30.812390 kubelet[2926]: E0124 00:57:30.811896 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e"
Jan 24 00:57:34.673812 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:41598.service - OpenSSH per-connection server daemon (10.0.0.1:41598).
Jan 24 00:57:34.746571 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:57:34.746803 kernel: audit: type=1130 audit(1769216254.673:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:34.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:35.055000 audit[6019]: USER_ACCT pid=6019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.066858 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:57:35.070150 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:57:35.097863 systemd-logind[1597]: New session 12 of user core.
Jan 24 00:57:35.100925 kernel: audit: type=1101 audit(1769216255.055:754): pid=6019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.061000 audit[6019]: CRED_ACQ pid=6019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.169791 kernel: audit: type=1103 audit(1769216255.061:755): pid=6019 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.169930 kernel: audit: type=1006 audit(1769216255.062:756): pid=6019 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Jan 24 00:57:35.062000 audit[6019]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18b161f0 a2=3 a3=0 items=0 ppid=1 pid=6019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:57:35.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:57:35.279228 kernel: audit: type=1300 audit(1769216255.062:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18b161f0 a2=3 a3=0 items=0 ppid=1 pid=6019 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:57:35.281081 kernel: audit: type=1327 audit(1769216255.062:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:57:35.282759 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 00:57:35.310000 audit[6019]: USER_START pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.317000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.490763 kernel: audit: type=1105 audit(1769216255.310:757): pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.490875 kernel: audit: type=1103 audit(1769216255.317:758): pid=6023 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.825177 containerd[1614]: time="2026-01-24T00:57:35.820928593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:57:35.955173 sshd[6023]: Connection closed by 10.0.0.1 port 41598
Jan 24 00:57:35.950240 sshd-session[6019]: pam_unix(sshd:session): session closed for user core
Jan 24 00:57:35.953000 audit[6019]: USER_END pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.974522 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:41598.service: Deactivated successfully.
Jan 24 00:57:35.989911 containerd[1614]: time="2026-01-24T00:57:35.976140706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:57:36.017895 kernel: audit: type=1106 audit(1769216255.953:759): pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:36.018019 kernel: audit: type=1104 audit(1769216255.954:760): pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:35.954000 audit[6019]: CRED_DISP pid=6019 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:57:36.007072 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 00:57:35.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:57:36.024442 containerd[1614]: time="2026-01-24T00:57:36.022141074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:57:36.053835 containerd[1614]: time="2026-01-24T00:57:36.025565746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:57:36.053835 containerd[1614]: time="2026-01-24T00:57:36.031172551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:57:36.053979 kubelet[2926]: E0124 00:57:36.027929 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:57:36.053979 kubelet[2926]: E0124 00:57:36.027984 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:57:36.053979 kubelet[2926]: E0124 00:57:36.028112 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf0f3ad691e64c2d81d0e0aa71a74bd8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:36.025542 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit.
Jan 24 00:57:36.052014 systemd-logind[1597]: Removed session 12.
Jan 24 00:57:36.171045 containerd[1614]: time="2026-01-24T00:57:36.164815800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:57:36.176875 containerd[1614]: time="2026-01-24T00:57:36.176691567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:57:36.176875 containerd[1614]: time="2026-01-24T00:57:36.176820155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:57:36.180149 kubelet[2926]: E0124 00:57:36.180001 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:57:36.180149 kubelet[2926]: E0124 00:57:36.180085 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:57:36.180460 kubelet[2926]: E0124 00:57:36.180225 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:57:36.184932 kubelet[2926]: E0124 00:57:36.182505 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d"
Jan 24 00:57:38.877018 containerd[1614]: time="2026-01-24T00:57:38.876505121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:57:38.991909 containerd[1614]: time="2026-01-24T00:57:38.991509596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 24 00:57:38.999123 containerd[1614]: time="2026-01-24T00:57:38.999059562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:57:39.005413 containerd[1614]: time="2026-01-24T00:57:39.004699607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 24 00:57:39.006195 kubelet[2926]: E0124 00:57:39.005944 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:57:39.006999 kubelet[2926]: E0124 00:57:39.006189 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:57:39.032406 kubelet[2926]: E0124 00:57:39.032035 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5cmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:39.038497 kubelet[2926]: E0124 00:57:39.038152 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" 
podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:57:39.820438 containerd[1614]: time="2026-01-24T00:57:39.819708170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:39.960426 containerd[1614]: time="2026-01-24T00:57:39.960202637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:39.969498 containerd[1614]: time="2026-01-24T00:57:39.967821766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:39.969498 containerd[1614]: time="2026-01-24T00:57:39.968515521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:39.972018 kubelet[2926]: E0124 00:57:39.970536 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:39.972018 kubelet[2926]: E0124 00:57:39.970676 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:39.972018 kubelet[2926]: E0124 00:57:39.970812 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:57:39.980081 containerd[1614]: time="2026-01-24T00:57:39.980044687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:40.062752 containerd[1614]: time="2026-01-24T00:57:40.061042460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:40.068759 containerd[1614]: time="2026-01-24T00:57:40.065659647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:40.068759 containerd[1614]: time="2026-01-24T00:57:40.065768719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:40.068906 kubelet[2926]: E0124 00:57:40.066159 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:40.068906 kubelet[2926]: E0124 00:57:40.066216 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:40.068906 kubelet[2926]: E0124 00:57:40.066875 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:40.070056 kubelet[2926]: E0124 00:57:40.069188 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:57:40.969519 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:41606.service - OpenSSH per-connection server daemon (10.0.0.1:41606). Jan 24 00:57:40.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:41606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:57:40.978501 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:57:40.978645 kernel: audit: type=1130 audit(1769216260.969:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:41606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:41.308532 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 41606 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:57:41.305000 audit[6043]: USER_ACCT pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.312792 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:41.348445 kernel: audit: type=1101 audit(1769216261.305:763): pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.308000 audit[6043]: CRED_ACQ pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.380420 systemd-logind[1597]: New session 13 of user core. 
Jan 24 00:57:41.418726 kernel: audit: type=1103 audit(1769216261.308:764): pid=6043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.418896 kernel: audit: type=1006 audit(1769216261.308:765): pid=6043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 00:57:41.308000 audit[6043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2defd4a0 a2=3 a3=0 items=0 ppid=1 pid=6043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:41.308000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:41.513709 kernel: audit: type=1300 audit(1769216261.308:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2defd4a0 a2=3 a3=0 items=0 ppid=1 pid=6043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:41.516662 kernel: audit: type=1327 audit(1769216261.308:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:41.512645 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 24 00:57:41.521000 audit[6043]: USER_START pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.572713 kernel: audit: type=1105 audit(1769216261.521:766): pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.572939 kernel: audit: type=1103 audit(1769216261.531:767): pid=6047 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.531000 audit[6047]: CRED_ACQ pid=6047 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:41.829941 containerd[1614]: time="2026-01-24T00:57:41.826725730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:42.021442 containerd[1614]: time="2026-01-24T00:57:42.021238672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:42.076527 containerd[1614]: time="2026-01-24T00:57:42.075839049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 
00:57:42.076527 containerd[1614]: time="2026-01-24T00:57:42.076025434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:42.080522 kubelet[2926]: E0124 00:57:42.078149 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:42.086696 kubelet[2926]: E0124 00:57:42.080976 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:42.086696 kubelet[2926]: E0124 00:57:42.085966 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.092385 kubelet[2926]: E0124 00:57:42.090107 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:57:42.215889 sshd[6047]: Connection closed by 10.0.0.1 port 41606 Jan 24 00:57:42.217632 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:42.228000 audit[6043]: USER_END pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:42.244874 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:41606.service: Deactivated successfully. Jan 24 00:57:42.252019 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:57:42.258149 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:57:42.264733 systemd-logind[1597]: Removed session 13. Jan 24 00:57:42.292424 kernel: audit: type=1106 audit(1769216262.228:768): pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:42.292682 kernel: audit: type=1104 audit(1769216262.229:769): pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:42.229000 audit[6043]: CRED_DISP pid=6043 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:42.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:41606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:42.815675 containerd[1614]: time="2026-01-24T00:57:42.814100611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:42.915962 containerd[1614]: time="2026-01-24T00:57:42.914895901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:42.928359 containerd[1614]: time="2026-01-24T00:57:42.926030903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:42.928359 containerd[1614]: time="2026-01-24T00:57:42.926197903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:42.928618 kubelet[2926]: E0124 00:57:42.926735 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:42.928618 kubelet[2926]: E0124 00:57:42.926797 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:42.928618 kubelet[2926]: E0124 00:57:42.926953 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:42.931173 kubelet[2926]: E0124 00:57:42.930710 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:57:45.828771 containerd[1614]: time="2026-01-24T00:57:45.828640977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:45.977441 containerd[1614]: time="2026-01-24T00:57:45.976399288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:57:45.987647 containerd[1614]: time="2026-01-24T00:57:45.985189711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:45.987647 containerd[1614]: time="2026-01-24T00:57:45.985512146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:57:45.993220 kubelet[2926]: E0124 00:57:45.986024 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:45.993220 kubelet[2926]: E0124 00:57:45.986087 2926 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:45.993220 kubelet[2926]: E0124 00:57:45.986229 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:45.993220 kubelet[2926]: E0124 00:57:45.990956 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:57:47.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:59912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:47.297742 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:59912.service - OpenSSH per-connection server daemon (10.0.0.1:59912). Jan 24 00:57:47.306028 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:57:47.306112 kernel: audit: type=1130 audit(1769216267.296:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:59912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:57:47.586000 audit[6093]: USER_ACCT pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.603835 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:47.612696 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 59912 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:57:47.598000 audit[6093]: CRED_ACQ pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.693882 systemd-logind[1597]: New session 14 of user core. 
Jan 24 00:57:47.711955 kernel: audit: type=1101 audit(1769216267.586:772): pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.712090 kernel: audit: type=1103 audit(1769216267.598:773): pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.791696 kernel: audit: type=1006 audit(1769216267.599:774): pid=6093 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 00:57:47.791828 kernel: audit: type=1300 audit(1769216267.599:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4c4b8f40 a2=3 a3=0 items=0 ppid=1 pid=6093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:47.599000 audit[6093]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4c4b8f40 a2=3 a3=0 items=0 ppid=1 pid=6093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:47.599000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:47.794897 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:57:47.806629 kernel: audit: type=1327 audit(1769216267.599:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:47.807000 audit[6093]: USER_START pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.856424 kernel: audit: type=1105 audit(1769216267.807:775): pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.874000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:47.923459 kernel: audit: type=1103 audit(1769216267.874:776): pid=6097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:48.449217 sshd[6097]: Connection closed by 10.0.0.1 port 59912 Jan 24 00:57:48.445196 sshd-session[6093]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:48.457000 audit[6093]: USER_END pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 24 00:57:48.530819 kernel: audit: type=1106 audit(1769216268.457:777): pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:48.457000 audit[6093]: CRED_DISP pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:48.547836 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:59912.service: Deactivated successfully. Jan 24 00:57:48.562066 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:57:48.572204 kernel: audit: type=1104 audit(1769216268.457:778): pid=6093 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:48.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:59912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:57:48.571152 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:57:48.583482 systemd-logind[1597]: Removed session 14. 
Jan 24 00:57:49.830825 kubelet[2926]: E0124 00:57:49.828992 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:57:50.812395 kubelet[2926]: E0124 00:57:50.812072 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:57:51.827016 kubelet[2926]: E0124 00:57:51.826853 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:57:52.824651 kubelet[2926]: E0124 00:57:52.822436 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:57:53.498647 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:37526.service - OpenSSH per-connection server daemon (10.0.0.1:37526). Jan 24 00:57:53.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:37526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:57:53.508926 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:57:53.509069 kernel: audit: type=1130 audit(1769216273.497:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:37526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:53.786070 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 37526 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:57:53.781000 audit[6114]: USER_ACCT pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:53.806078 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:53.791000 audit[6114]: CRED_ACQ pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:53.872009 kernel: audit: type=1101 audit(1769216273.781:781): pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:53.876708 kernel: audit: type=1103 audit(1769216273.791:782): pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:53.876772 kernel: audit: type=1006 audit(1769216273.791:783): pid=6114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 00:57:53.869836 systemd-logind[1597]: New session 15 of user core. 
Jan 24 00:57:53.894792 kernel: audit: type=1300 audit(1769216273.791:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd721d7b0 a2=3 a3=0 items=0 ppid=1 pid=6114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:53.791000 audit[6114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd721d7b0 a2=3 a3=0 items=0 ppid=1 pid=6114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:57:53.791000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:53.945697 kernel: audit: type=1327 audit(1769216273.791:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:53.953605 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:57:53.993000 audit[6114]: USER_START pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.013000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.056822 kernel: audit: type=1105 audit(1769216273.993:784): pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.056927 kernel: audit: type=1103 audit(1769216274.013:785): pid=6118 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.734921 sshd[6118]: Connection closed by 10.0.0.1 port 37526 Jan 24 00:57:54.757761 sshd-session[6114]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:54.760000 audit[6114]: USER_END pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.773696 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:37526.service: Deactivated successfully. 
Jan 24 00:57:54.789186 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:57:54.797649 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:57:54.807707 systemd-logind[1597]: Removed session 15. Jan 24 00:57:54.825579 kernel: audit: type=1106 audit(1769216274.760:786): pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.825800 kubelet[2926]: E0124 00:57:54.825007 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:57:54.760000 audit[6114]: CRED_DISP pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:54.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:37526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:54.898571 kernel: audit: type=1104 audit(1769216274.760:787): pid=6114 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:59.764718 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:37542.service - OpenSSH per-connection server daemon (10.0.0.1:37542). Jan 24 00:57:59.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:37542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:57:59.778519 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:57:59.778641 kernel: audit: type=1130 audit(1769216279.765:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:37542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:57:59.817635 kubelet[2926]: E0124 00:57:59.817150 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:57:59.962000 audit[6134]: USER_ACCT pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:59.971206 sshd-session[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:59.985195 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 37542 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:00.001883 kernel: audit: type=1101 audit(1769216279.962:790): pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:57:59.967000 audit[6134]: CRED_ACQ pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.011243 systemd-logind[1597]: New session 16 of user core. 
Jan 24 00:58:00.049676 kernel: audit: type=1103 audit(1769216279.967:791): pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.049805 kernel: audit: type=1006 audit(1769216279.967:792): pid=6134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 00:57:59.967000 audit[6134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3b1f30c0 a2=3 a3=0 items=0 ppid=1 pid=6134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:00.114191 kernel: audit: type=1300 audit(1769216279.967:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3b1f30c0 a2=3 a3=0 items=0 ppid=1 pid=6134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:00.114538 kernel: audit: type=1327 audit(1769216279.967:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:57:59.967000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:00.111755 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 00:58:00.122000 audit[6134]: USER_START pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.174859 kernel: audit: type=1105 audit(1769216280.122:793): pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.131000 audit[6138]: CRED_ACQ pid=6138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.203536 kernel: audit: type=1103 audit(1769216280.131:794): pid=6138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.491170 sshd[6138]: Connection closed by 10.0.0.1 port 37542 Jan 24 00:58:00.498900 sshd-session[6134]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:00.504000 audit[6134]: USER_END pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.511718 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. 
Jan 24 00:58:00.515369 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:37542.service: Deactivated successfully. Jan 24 00:58:00.530774 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:58:00.542944 kernel: audit: type=1106 audit(1769216280.504:795): pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.504000 audit[6134]: CRED_DISP pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.546539 systemd-logind[1597]: Removed session 16. Jan 24 00:58:00.565906 kernel: audit: type=1104 audit(1769216280.504:796): pid=6134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:00.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:37542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:01.862037 kubelet[2926]: E0124 00:58:01.861980 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:58:04.816808 kubelet[2926]: E0124 00:58:04.814931 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:58:04.825520 kubelet[2926]: E0124 00:58:04.824979 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:58:04.828976 kubelet[2926]: E0124 00:58:04.828655 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:58:05.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:58922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:05.560386 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:58922.service - OpenSSH per-connection server daemon (10.0.0.1:58922). Jan 24 00:58:05.571860 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:05.571922 kernel: audit: type=1130 audit(1769216285.559:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:58922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:05.839025 kubelet[2926]: E0124 00:58:05.833849 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:05.902000 audit[6152]: USER_ACCT pid=6152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:05.912820 sshd[6152]: Accepted publickey for core from 10.0.0.1 port 58922 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:05.916834 sshd-session[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:05.956507 kernel: audit: type=1101 audit(1769216285.902:799): pid=6152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:05.956633 kernel: audit: type=1103 audit(1769216285.907:800): pid=6152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:05.907000 audit[6152]: CRED_ACQ pid=6152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:05.965783 systemd-logind[1597]: New session 17 of user core. 
Jan 24 00:58:06.029644 kernel: audit: type=1006 audit(1769216285.907:801): pid=6152 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 24 00:58:06.029794 kernel: audit: type=1300 audit(1769216285.907:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5400a780 a2=3 a3=0 items=0 ppid=1 pid=6152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:05.907000 audit[6152]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5400a780 a2=3 a3=0 items=0 ppid=1 pid=6152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:05.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:06.106678 kernel: audit: type=1327 audit(1769216285.907:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:06.118974 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 24 00:58:06.142000 audit[6152]: USER_START pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.209567 kernel: audit: type=1105 audit(1769216286.142:802): pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.162000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.257569 kernel: audit: type=1103 audit(1769216286.162:803): pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.756670 sshd[6156]: Connection closed by 10.0.0.1 port 58922 Jan 24 00:58:06.763177 sshd-session[6152]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:06.766000 audit[6152]: USER_END pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.800688 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. 
Jan 24 00:58:06.803652 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:58922.service: Deactivated successfully. Jan 24 00:58:06.814111 kubelet[2926]: E0124 00:58:06.813172 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:06.820636 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:58:06.861201 kernel: audit: type=1106 audit(1769216286.766:804): pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.767000 audit[6152]: CRED_DISP pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:06.873950 systemd-logind[1597]: Removed session 17. Jan 24 00:58:06.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:58922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:06.950846 kernel: audit: type=1104 audit(1769216286.767:805): pid=6152 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:08.851798 kubelet[2926]: E0124 00:58:08.832806 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:58:11.851813 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:11.851989 kernel: audit: type=1130 audit(1769216291.809:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:58928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:11.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:58928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:11.809135 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:58928.service - OpenSSH per-connection server daemon (10.0.0.1:58928). 
Jan 24 00:58:12.084125 sshd[6171]: Accepted publickey for core from 10.0.0.1 port 58928 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:12.083000 audit[6171]: USER_ACCT pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.091790 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:12.149014 systemd-logind[1597]: New session 18 of user core. Jan 24 00:58:12.165708 kernel: audit: type=1101 audit(1769216292.083:808): pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.165812 kernel: audit: type=1103 audit(1769216292.087:809): pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.087000 audit[6171]: CRED_ACQ pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.260608 kernel: audit: type=1006 audit(1769216292.087:810): pid=6171 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 24 00:58:12.261149 kernel: audit: type=1300 audit(1769216292.087:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec049bee0 a2=3 a3=0 items=0 ppid=1 pid=6171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:12.087000 audit[6171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec049bee0 a2=3 a3=0 items=0 ppid=1 pid=6171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:12.274683 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:58:12.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:12.315616 kernel: audit: type=1327 audit(1769216292.087:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:12.315746 kernel: audit: type=1105 audit(1769216292.301:811): pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.301000 audit[6171]: USER_START pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.306000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.423836 kernel: audit: type=1103 audit(1769216292.306:812): pid=6175 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.777881 sshd[6175]: Connection closed by 10.0.0.1 port 58928 Jan 24 00:58:12.778719 sshd-session[6171]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:12.786000 audit[6171]: USER_END pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.796939 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:58928.service: Deactivated successfully. Jan 24 00:58:12.815675 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:58:12.848872 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:58:12.853238 kubelet[2926]: E0124 00:58:12.850037 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:58:12.880910 systemd-logind[1597]: Removed session 18. 
Jan 24 00:58:12.904605 kernel: audit: type=1106 audit(1769216292.786:813): pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.788000 audit[6171]: CRED_DISP pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.970602 kernel: audit: type=1104 audit(1769216292.788:814): pid=6171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:12.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:58928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:15.831521 kubelet[2926]: E0124 00:58:15.830842 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:58:17.831515 kubelet[2926]: E0124 00:58:17.811241 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:17.831207 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:60278.service - OpenSSH per-connection server daemon (10.0.0.1:60278). Jan 24 00:58:17.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:60278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:17.863653 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:17.863814 kernel: audit: type=1130 audit(1769216297.842:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:60278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:17.863875 kubelet[2926]: E0124 00:58:17.859913 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:58:17.863875 kubelet[2926]: E0124 00:58:17.860048 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:58:17.863875 kubelet[2926]: E0124 00:58:17.860730 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:58:18.341000 audit[6215]: USER_ACCT pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.346633 sshd[6215]: Accepted publickey for core from 10.0.0.1 port 60278 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:18.356650 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:18.394567 systemd-logind[1597]: New session 19 of user core. Jan 24 00:58:18.404580 kernel: audit: type=1101 audit(1769216298.341:817): pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.404699 kernel: audit: type=1103 audit(1769216298.348:818): pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.348000 audit[6215]: CRED_ACQ pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.426682 kernel: audit: type=1006 audit(1769216298.348:819): pid=6215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 24 00:58:18.348000 audit[6215]: SYSCALL arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7ffd0675fc80 a2=3 a3=0 items=0 ppid=1 pid=6215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:18.476573 kernel: audit: type=1300 audit(1769216298.348:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0675fc80 a2=3 a3=0 items=0 ppid=1 pid=6215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:18.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:18.477090 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:58:18.489512 kernel: audit: type=1327 audit(1769216298.348:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:18.518000 audit[6215]: USER_START pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.597552 kernel: audit: type=1105 audit(1769216298.518:820): pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:18.597697 kernel: audit: type=1103 audit(1769216298.529:821): pid=6219 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 24 00:58:18.529000 audit[6219]: CRED_ACQ pid=6219 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:19.003115 sshd[6219]: Connection closed by 10.0.0.1 port 60278 Jan 24 00:58:19.007609 sshd-session[6215]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:19.009000 audit[6215]: USER_END pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:19.035676 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:60278.service: Deactivated successfully. Jan 24 00:58:19.055203 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:58:19.062618 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:58:19.067546 systemd-logind[1597]: Removed session 19. 
Jan 24 00:58:19.018000 audit[6215]: CRED_DISP pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:19.112645 kernel: audit: type=1106 audit(1769216299.009:822): pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:19.113841 kernel: audit: type=1104 audit(1769216299.018:823): pid=6215 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:19.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:60278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:19.820615 kubelet[2926]: E0124 00:58:19.819892 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:58:22.819519 kubelet[2926]: E0124 00:58:22.807966 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:24.100654 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:24.100792 kernel: audit: type=1130 audit(1769216304.075:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:37688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:24.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:37688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:24.075756 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:37688.service - OpenSSH per-connection server daemon (10.0.0.1:37688). 
Jan 24 00:58:24.419000 audit[6236]: USER_ACCT pid=6236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.429501 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 37688 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:24.432098 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:24.522653 kernel: audit: type=1101 audit(1769216304.419:826): pid=6236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.500539 systemd-logind[1597]: New session 20 of user core. Jan 24 00:58:24.429000 audit[6236]: CRED_ACQ pid=6236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.535022 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 24 00:58:24.578676 kernel: audit: type=1103 audit(1769216304.429:827): pid=6236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.689531 kernel: audit: type=1006 audit(1769216304.429:828): pid=6236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 24 00:58:24.689684 kernel: audit: type=1300 audit(1769216304.429:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd63f28550 a2=3 a3=0 items=0 ppid=1 pid=6236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:24.429000 audit[6236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd63f28550 a2=3 a3=0 items=0 ppid=1 pid=6236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:24.429000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:24.719120 kernel: audit: type=1327 audit(1769216304.429:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:24.571000 audit[6236]: USER_START pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.600000 audit[6241]: CRED_ACQ pid=6241 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.808189 kernel: audit: type=1105 audit(1769216304.571:829): pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:24.815791 kernel: audit: type=1103 audit(1769216304.600:830): pid=6241 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:25.030132 sshd[6241]: Connection closed by 10.0.0.1 port 37688 Jan 24 00:58:25.028592 sshd-session[6236]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:25.037000 audit[6236]: USER_END pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:25.062000 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:37688.service: Deactivated successfully. 
Jan 24 00:58:25.084073 kernel: audit: type=1106 audit(1769216305.037:831): pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:25.084217 kernel: audit: type=1104 audit(1769216305.039:832): pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:25.039000 audit[6236]: CRED_DISP pid=6236 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:25.082910 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:58:25.094793 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:58:25.109680 systemd-logind[1597]: Removed session 20. Jan 24 00:58:25.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:37688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:25.817673 kubelet[2926]: E0124 00:58:25.815913 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:27.825622 kubelet[2926]: E0124 00:58:27.823101 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:27.837624 containerd[1614]: time="2026-01-24T00:58:27.836952556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:27.965070 containerd[1614]: time="2026-01-24T00:58:27.964876803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:27.973565 containerd[1614]: time="2026-01-24T00:58:27.972922302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:27.973565 containerd[1614]: time="2026-01-24T00:58:27.973121493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:27.975485 kubelet[2926]: E0124 00:58:27.974193 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:27.978788 kubelet[2926]: E0124 00:58:27.977963 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:27.978788 kubelet[2926]: E0124 00:58:27.978166 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:27.980753 kubelet[2926]: E0124 00:58:27.980532 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:58:29.817561 containerd[1614]: time="2026-01-24T00:58:29.815984996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:58:29.956097 containerd[1614]: time="2026-01-24T00:58:29.956032303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 
00:58:29.966012 containerd[1614]: time="2026-01-24T00:58:29.964794358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:58:29.966012 containerd[1614]: time="2026-01-24T00:58:29.964947763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:29.967843 kubelet[2926]: E0124 00:58:29.967729 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:29.967843 kubelet[2926]: E0124 00:58:29.967793 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:58:29.974825 kubelet[2926]: E0124 00:58:29.967930 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf0f3ad691e64c2d81d0e0aa71a74bd8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:30.013216 containerd[1614]: time="2026-01-24T00:58:30.008661230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:58:30.100000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:30.098999 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704). Jan 24 00:58:30.131963 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:30.132073 kernel: audit: type=1130 audit(1769216310.100:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:30.224501 containerd[1614]: time="2026-01-24T00:58:30.223827035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:30.249192 containerd[1614]: time="2026-01-24T00:58:30.248514388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:58:30.249192 containerd[1614]: time="2026-01-24T00:58:30.248637618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:30.261856 kubelet[2926]: E0124 00:58:30.253487 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:30.261856 kubelet[2926]: E0124 00:58:30.253561 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:58:30.261856 kubelet[2926]: E0124 00:58:30.253721 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:30.261856 kubelet[2926]: E0124 00:58:30.257589 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:58:30.446000 audit[6271]: USER_ACCT pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.453515 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:30.462111 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:30.492888 kernel: audit: type=1101 audit(1769216310.446:835): pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.449000 audit[6271]: CRED_ACQ pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.548881 systemd-logind[1597]: New session 21 of user core. Jan 24 00:58:30.577525 kernel: audit: type=1103 audit(1769216310.449:836): pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.577611 kernel: audit: type=1006 audit(1769216310.449:837): pid=6271 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:58:30.611647 kernel: audit: type=1300 audit(1769216310.449:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf845220 a2=3 a3=0 items=0 ppid=1 pid=6271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:30.449000 audit[6271]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf845220 a2=3 a3=0 items=0 ppid=1 pid=6271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:30.712673 kernel: audit: type=1327 audit(1769216310.449:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:30.449000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:30.724733 systemd[1]: Started session-21.scope 
- Session 21 of User core. Jan 24 00:58:30.763000 audit[6271]: USER_START pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.924063 kernel: audit: type=1105 audit(1769216310.763:838): pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.924174 kernel: audit: type=1103 audit(1769216310.784:839): pid=6275 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:30.784000 audit[6275]: CRED_ACQ pid=6275 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:31.591960 sshd[6275]: Connection closed by 10.0.0.1 port 37704 Jan 24 00:58:31.604823 sshd-session[6271]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:31.618000 audit[6271]: USER_END pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:31.664807 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:37704.service: Deactivated 
successfully. Jan 24 00:58:31.670971 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:58:31.675638 kernel: audit: type=1106 audit(1769216311.618:840): pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:31.674620 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:58:31.678048 systemd-logind[1597]: Removed session 21. Jan 24 00:58:31.618000 audit[6271]: CRED_DISP pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:31.710662 kernel: audit: type=1104 audit(1769216311.618:841): pid=6271 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:31.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:31.819755 containerd[1614]: time="2026-01-24T00:58:31.819693905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:58:31.976077 containerd[1614]: time="2026-01-24T00:58:31.973794913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:31.992538 containerd[1614]: time="2026-01-24T00:58:31.992476649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:58:31.999884 kubelet[2926]: E0124 00:58:31.996650 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:31.999884 kubelet[2926]: E0124 00:58:31.996820 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:58:31.999884 kubelet[2926]: E0124 00:58:31.997116 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:58:32.002969 containerd[1614]: time="2026-01-24T00:58:31.999882141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:32.008656 containerd[1614]: time="2026-01-24T00:58:32.005236450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:58:32.209772 containerd[1614]: time="2026-01-24T00:58:32.208681456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:32.221525 containerd[1614]: time="2026-01-24T00:58:32.216938579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:58:32.221525 containerd[1614]: time="2026-01-24T00:58:32.217091004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:32.222154 kubelet[2926]: E0124 00:58:32.221721 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:32.222154 kubelet[2926]: E0124 00:58:32.221863 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:58:32.222575 kubelet[2926]: E0124 00:58:32.222137 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:32.223672 containerd[1614]: time="2026-01-24T00:58:32.223127402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:58:32.231479 kubelet[2926]: E0124 00:58:32.227712 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:58:32.372748 containerd[1614]: time="2026-01-24T00:58:32.369139522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:32.390083 containerd[1614]: 
time="2026-01-24T00:58:32.387058839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:58:32.390083 containerd[1614]: time="2026-01-24T00:58:32.387188650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:32.401904 kubelet[2926]: E0124 00:58:32.387765 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:32.401904 kubelet[2926]: E0124 00:58:32.387835 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:58:32.401904 kubelet[2926]: E0124 00:58:32.387985 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:32.401904 kubelet[2926]: E0124 00:58:32.389747 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:58:32.825466 containerd[1614]: time="2026-01-24T00:58:32.824988908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:58:32.917743 containerd[1614]: time="2026-01-24T00:58:32.917685768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:32.943056 containerd[1614]: time="2026-01-24T00:58:32.942973997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:58:32.947639 containerd[1614]: time="2026-01-24T00:58:32.943692536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:32.947793 kubelet[2926]: E0124 00:58:32.946006 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:32.947793 kubelet[2926]: E0124 00:58:32.946072 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:58:32.947793 kubelet[2926]: E0124 00:58:32.947683 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5cmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:32.949651 kubelet[2926]: E0124 00:58:32.949463 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:58:33.825837 
containerd[1614]: time="2026-01-24T00:58:33.823666229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:58:34.057282 containerd[1614]: time="2026-01-24T00:58:34.056486827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:58:34.064057 containerd[1614]: time="2026-01-24T00:58:34.063851675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:58:34.064057 containerd[1614]: time="2026-01-24T00:58:34.063963162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:58:34.066146 kubelet[2926]: E0124 00:58:34.066061 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:34.067873 kubelet[2926]: E0124 00:58:34.067564 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:58:34.074785 kubelet[2926]: E0124 00:58:34.071560 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:58:34.075936 kubelet[2926]: E0124 00:58:34.075744 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:58:34.812736 kubelet[2926]: E0124 00:58:34.806806 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:34.812736 kubelet[2926]: E0124 00:58:34.808137 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:58:36.684963 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:46830.service - OpenSSH per-connection server daemon (10.0.0.1:46830). Jan 24 00:58:36.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:46830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:36.721587 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:36.721755 kernel: audit: type=1130 audit(1769216316.690:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:46830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:37.087000 audit[6304]: USER_ACCT pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.099830 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 46830 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:37.102851 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:37.160774 kernel: audit: type=1101 audit(1769216317.087:844): pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.160913 kernel: audit: type=1103 audit(1769216317.097:845): pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.097000 audit[6304]: CRED_ACQ pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.162518 systemd-logind[1597]: New session 22 of user core. Jan 24 00:58:37.196744 kernel: audit: type=1006 audit(1769216317.097:846): pid=6304 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 00:58:37.199112 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 00:58:37.097000 audit[6304]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58439550 a2=3 a3=0 items=0 ppid=1 pid=6304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:37.258222 kernel: audit: type=1300 audit(1769216317.097:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58439550 a2=3 a3=0 items=0 ppid=1 pid=6304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:37.097000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:37.294507 kernel: audit: type=1327 audit(1769216317.097:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:37.234000 audit[6304]: USER_START pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.392984 kernel: audit: type=1105 audit(1769216317.234:847): pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.266000 audit[6308]: CRED_ACQ pid=6308 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.442935 kernel: audit: 
type=1103 audit(1769216317.266:848): pid=6308 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.887597 sshd[6308]: Connection closed by 10.0.0.1 port 46830 Jan 24 00:58:37.889786 sshd-session[6304]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:37.894000 audit[6304]: USER_END pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.908702 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:46830.service: Deactivated successfully. Jan 24 00:58:37.927989 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:58:37.949235 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:58:37.894000 audit[6304]: CRED_DISP pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.966039 systemd-logind[1597]: Removed session 22. 
Jan 24 00:58:38.004619 kernel: audit: type=1106 audit(1769216317.894:849): pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:38.004953 kernel: audit: type=1104 audit(1769216317.894:850): pid=6304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:37.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:46830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:39.837504 kubelet[2926]: E0124 00:58:39.836876 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:58:41.815112 kubelet[2926]: E0124 00:58:41.815050 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:58:42.951087 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:47292.service - OpenSSH per-connection server daemon (10.0.0.1:47292). Jan 24 00:58:42.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:47292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:42.969786 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:42.969950 kernel: audit: type=1130 audit(1769216322.950:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:47292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:43.212797 sshd[6323]: Accepted publickey for core from 10.0.0.1 port 47292 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:43.209000 audit[6323]: USER_ACCT pid=6323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.229212 sshd-session[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:43.297057 systemd-logind[1597]: New session 23 of user core. 
Jan 24 00:58:43.309521 kernel: audit: type=1101 audit(1769216323.209:853): pid=6323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.309668 kernel: audit: type=1103 audit(1769216323.219:854): pid=6323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.219000 audit[6323]: CRED_ACQ pid=6323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.384172 kernel: audit: type=1006 audit(1769216323.220:855): pid=6323 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 00:58:43.421111 kernel: audit: type=1300 audit(1769216323.220:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce10b6970 a2=3 a3=0 items=0 ppid=1 pid=6323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:43.220000 audit[6323]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce10b6970 a2=3 a3=0 items=0 ppid=1 pid=6323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:43.416938 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 24 00:58:43.549906 kernel: audit: type=1327 audit(1769216323.220:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:43.220000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:43.453000 audit[6323]: USER_START pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.610936 kernel: audit: type=1105 audit(1769216323.453:856): pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.616728 kernel: audit: type=1103 audit(1769216323.476:857): pid=6327 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.476000 audit[6327]: CRED_ACQ pid=6327 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.921515 sshd[6327]: Connection closed by 10.0.0.1 port 47292 Jan 24 00:58:43.922546 sshd-session[6323]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:43.954991 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:47298.service - OpenSSH per-connection server daemon (10.0.0.1:47298). 
Jan 24 00:58:43.954000 audit[6323]: USER_END pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.972806 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:47292.service: Deactivated successfully. Jan 24 00:58:43.979923 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:58:43.994636 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:58:43.999016 systemd-logind[1597]: Removed session 23. Jan 24 00:58:44.031659 kernel: audit: type=1106 audit(1769216323.954:858): pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.954000 audit[6323]: CRED_DISP pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:43.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.105:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:43.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:47292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:44.096610 kernel: audit: type=1104 audit(1769216323.954:859): pid=6323 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.294000 audit[6340]: USER_ACCT pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.296030 sshd[6340]: Accepted publickey for core from 10.0.0.1 port 47298 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:44.300000 audit[6340]: CRED_ACQ pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.300000 audit[6340]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7163ffb0 a2=3 a3=0 items=0 ppid=1 pid=6340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:44.300000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:44.305001 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:44.330038 systemd-logind[1597]: New session 24 of user core. Jan 24 00:58:44.356689 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 24 00:58:44.367000 audit[6340]: USER_START pid=6340 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.371000 audit[6347]: CRED_ACQ pid=6347 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.815817 kubelet[2926]: E0124 00:58:44.815719 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:58:44.823221 kubelet[2926]: E0124 00:58:44.818483 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:58:44.883480 sshd[6347]: Connection closed by 10.0.0.1 port 47298 Jan 24 00:58:44.883206 sshd-session[6340]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:44.893000 audit[6340]: 
USER_END pid=6340 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.893000 audit[6340]: CRED_DISP pid=6340 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:44.904749 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:47298.service: Deactivated successfully. Jan 24 00:58:44.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.105:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:44.924005 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:58:44.934036 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:58:44.955583 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:47308.service - OpenSSH per-connection server daemon (10.0.0.1:47308). Jan 24 00:58:44.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.105:22-10.0.0.1:47308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:44.970172 systemd-logind[1597]: Removed session 24. 
Jan 24 00:58:45.218000 audit[6385]: USER_ACCT pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.221146 sshd[6385]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:45.222000 audit[6385]: CRED_ACQ pid=6385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.226000 audit[6385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc232bbeb0 a2=3 a3=0 items=0 ppid=1 pid=6385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:45.226000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:45.240888 sshd-session[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:45.270742 systemd-logind[1597]: New session 25 of user core. Jan 24 00:58:45.286848 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 24 00:58:45.309000 audit[6385]: USER_START pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.323000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.813549 sshd[6389]: Connection closed by 10.0.0.1 port 47308 Jan 24 00:58:45.808681 sshd-session[6385]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:45.814617 kubelet[2926]: E0124 00:58:45.814109 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:58:45.820829 kubelet[2926]: E0124 00:58:45.818767 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:58:45.829000 audit[6385]: USER_END pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.829000 audit[6385]: CRED_DISP pid=6385 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:45.847120 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:58:45.847854 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:47308.service: Deactivated successfully. Jan 24 00:58:45.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.105:22-10.0.0.1:47308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:45.868925 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:58:45.899031 systemd-logind[1597]: Removed session 25. Jan 24 00:58:50.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:47322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:50.845595 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:47322.service - OpenSSH per-connection server daemon (10.0.0.1:47322). Jan 24 00:58:50.869016 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:58:50.869906 kernel: audit: type=1130 audit(1769216330.845:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:47322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:51.184736 sshd[6402]: Accepted publickey for core from 10.0.0.1 port 47322 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:51.178000 audit[6402]: USER_ACCT pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.227906 kernel: audit: type=1101 audit(1769216331.178:880): pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.230000 audit[6402]: CRED_ACQ pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.250602 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:51.321041 systemd-logind[1597]: New session 26 of user core. 
Jan 24 00:58:51.350632 kernel: audit: type=1103 audit(1769216331.230:881): pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.350803 kernel: audit: type=1006 audit(1769216331.230:882): pid=6402 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 24 00:58:51.352679 kernel: audit: type=1300 audit(1769216331.230:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb90d46b0 a2=3 a3=0 items=0 ppid=1 pid=6402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:51.230000 audit[6402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb90d46b0 a2=3 a3=0 items=0 ppid=1 pid=6402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:51.230000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:51.405552 kernel: audit: type=1327 audit(1769216331.230:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:51.408056 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 24 00:58:51.421000 audit[6402]: USER_START pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.480727 kernel: audit: type=1105 audit(1769216331.421:883): pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.425000 audit[6406]: CRED_ACQ pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:51.522120 kernel: audit: type=1103 audit(1769216331.425:884): pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:52.215814 sshd[6406]: Connection closed by 10.0.0.1 port 47322 Jan 24 00:58:52.224703 sshd-session[6402]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:52.241000 audit[6402]: USER_END pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:52.261173 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. 
Jan 24 00:58:52.279151 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:47322.service: Deactivated successfully. Jan 24 00:58:52.292229 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:58:52.318055 systemd-logind[1597]: Removed session 26. Jan 24 00:58:52.326806 kernel: audit: type=1106 audit(1769216332.241:885): pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:52.241000 audit[6402]: CRED_DISP pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:52.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:47322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:52.381119 kernel: audit: type=1104 audit(1769216332.241:886): pid=6402 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:53.654115 containerd[1614]: time="2026-01-24T00:58:53.650776250Z" level=info msg="container event discarded" container=1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a type=CONTAINER_CREATED_EVENT Jan 24 00:58:53.669725 containerd[1614]: time="2026-01-24T00:58:53.669423354Z" level=info msg="container event discarded" container=1bda62d5b3ccbbca38e77522ff18665c75c51d52ec6969ae22dca640504b871a type=CONTAINER_STARTED_EVENT Jan 24 00:58:53.682684 containerd[1614]: time="2026-01-24T00:58:53.682159095Z" level=info msg="container event discarded" container=28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4 type=CONTAINER_CREATED_EVENT Jan 24 00:58:53.682684 containerd[1614]: time="2026-01-24T00:58:53.682544144Z" level=info msg="container event discarded" container=28bc763adbb3fcd4b6306d8ffec46fc5acf514a317d60518091eb1d5c83ea0f4 type=CONTAINER_STARTED_EVENT Jan 24 00:58:53.703953 containerd[1614]: time="2026-01-24T00:58:53.703809396Z" level=info msg="container event discarded" container=e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e type=CONTAINER_CREATED_EVENT Jan 24 00:58:53.703953 containerd[1614]: time="2026-01-24T00:58:53.703947333Z" level=info msg="container event discarded" container=e99e468dd06ea01975e21b5d8e407950defde5ae464d9fb1924843ff9b07b84e type=CONTAINER_STARTED_EVENT Jan 24 00:58:53.852573 kubelet[2926]: E0124 00:58:53.841200 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:58:53.861751 containerd[1614]: time="2026-01-24T00:58:53.842013031Z" level=info msg="container event discarded" container=a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb type=CONTAINER_CREATED_EVENT Jan 24 00:58:53.861751 containerd[1614]: time="2026-01-24T00:58:53.855169123Z" level=info msg="container event discarded" container=3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21 type=CONTAINER_CREATED_EVENT Jan 24 00:58:53.866930 containerd[1614]: time="2026-01-24T00:58:53.866853697Z" level=info msg="container event discarded" container=f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128 type=CONTAINER_CREATED_EVENT Jan 24 00:58:54.245188 containerd[1614]: time="2026-01-24T00:58:54.244411510Z" level=info msg="container event discarded" container=3c1de64c3a3c36cdbfb50c0db33acd532afc4d7cc92226520319a1dfa2a08e21 type=CONTAINER_STARTED_EVENT Jan 24 00:58:54.319094 containerd[1614]: time="2026-01-24T00:58:54.318997979Z" level=info msg="container event discarded" container=a1da15f460b2e73af4509c410e224662dd899ea6f6fd2e336fa27eeab31508eb type=CONTAINER_STARTED_EVENT Jan 24 00:58:54.383072 containerd[1614]: time="2026-01-24T00:58:54.382967518Z" level=info msg="container event discarded" container=f31a9862e08b72a2304bab04f4987dfbdec7843153597d1b959104b826da8128 type=CONTAINER_STARTED_EVENT Jan 24 00:58:54.825187 
kubelet[2926]: E0124 00:58:54.825131 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:58:57.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:57.299631 systemd[1]: Started sshd@25-10.0.0.105:22-10.0.0.1:55688.service - OpenSSH per-connection server daemon (10.0.0.1:55688). Jan 24 00:58:57.306587 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:58:57.306650 kernel: audit: type=1130 audit(1769216337.299:888): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:58:57.572000 audit[6422]: USER_ACCT pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.580179 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 55688 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:58:57.593742 sshd-session[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:58:57.585000 audit[6422]: CRED_ACQ pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.625540 systemd-logind[1597]: New session 27 of user core. Jan 24 00:58:57.671235 kernel: audit: type=1101 audit(1769216337.572:889): pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.671588 kernel: audit: type=1103 audit(1769216337.585:890): pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.708632 kernel: audit: type=1006 audit(1769216337.585:891): pid=6422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 24 00:58:57.708773 kernel: audit: type=1300 audit(1769216337.585:891): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1e1e290 a2=3 a3=0 items=0 ppid=1 pid=6422 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:57.585000 audit[6422]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1e1e290 a2=3 a3=0 items=0 ppid=1 pid=6422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:58:57.705930 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:58:57.585000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:57.744000 audit[6422]: USER_START pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.811737 kernel: audit: type=1327 audit(1769216337.585:891): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:58:57.811880 kernel: audit: type=1105 audit(1769216337.744:892): pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.811923 kernel: audit: type=1103 audit(1769216337.765:893): pid=6427 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.765000 audit[6427]: CRED_ACQ pid=6427 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:57.828733 kubelet[2926]: E0124 00:58:57.823673 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:58:58.260687 sshd[6427]: Connection closed by 10.0.0.1 port 55688 Jan 24 00:58:58.262698 sshd-session[6422]: pam_unix(sshd:session): session closed for user core Jan 24 00:58:58.267000 audit[6422]: USER_END pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.312103 kernel: audit: type=1106 audit(1769216338.267:894): pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.289130 systemd[1]: sshd@25-10.0.0.105:22-10.0.0.1:55688.service: Deactivated successfully. Jan 24 00:58:58.309082 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 24 00:58:58.267000 audit[6422]: CRED_DISP pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.322625 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:58:58.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:58:58.344590 kernel: audit: type=1104 audit(1769216338.267:895): pid=6422 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:58:58.349630 systemd-logind[1597]: Removed session 27. Jan 24 00:58:58.813002 kubelet[2926]: E0124 00:58:58.812861 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:58:59.829916 kubelet[2926]: E0124 00:58:59.824175 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:59:00.833042 kubelet[2926]: E0124 00:59:00.832955 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:59:03.290782 systemd[1]: Started sshd@26-10.0.0.105:22-10.0.0.1:55696.service - OpenSSH per-connection server daemon (10.0.0.1:55696). Jan 24 00:59:03.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:03.307954 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:03.308425 kernel: audit: type=1130 audit(1769216343.289:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:03.603000 audit[6441]: USER_ACCT pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.605958 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 55696 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:03.614707 sshd-session[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:03.628648 systemd-logind[1597]: New session 28 of user core. Jan 24 00:59:03.674616 kernel: audit: type=1101 audit(1769216343.603:898): pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.610000 audit[6441]: CRED_ACQ pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.755404 kernel: audit: type=1103 audit(1769216343.610:899): pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.755659 kernel: audit: type=1006 audit(1769216343.611:900): pid=6441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 24 00:59:03.793613 kernel: audit: type=1300 audit(1769216343.611:900): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeda89eea0 a2=3 a3=0 items=0 ppid=1 pid=6441 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:03.611000 audit[6441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeda89eea0 a2=3 a3=0 items=0 ppid=1 pid=6441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:03.794171 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 00:59:03.860210 kernel: audit: type=1327 audit(1769216343.611:900): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:03.611000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:03.824000 audit[6441]: USER_START pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.904562 kernel: audit: type=1105 audit(1769216343.824:901): pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.845000 audit[6445]: CRED_ACQ pid=6445 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:03.945576 kernel: audit: type=1103 audit(1769216343.845:902): pid=6445 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.312987 sshd[6445]: Connection closed by 10.0.0.1 port 55696 Jan 24 00:59:04.317210 sshd-session[6441]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:04.331000 audit[6441]: USER_END pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.358083 systemd[1]: sshd@26-10.0.0.105:22-10.0.0.1:55696.service: Deactivated successfully. Jan 24 00:59:04.358736 systemd-logind[1597]: Session 28 logged out. Waiting for processes to exit. Jan 24 00:59:04.370118 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 00:59:04.386864 kernel: audit: type=1106 audit(1769216344.331:903): pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.394683 systemd-logind[1597]: Removed session 28. Jan 24 00:59:04.332000 audit[6441]: CRED_DISP pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:04.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:55696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:04.477659 kernel: audit: type=1104 audit(1769216344.332:904): pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:07.851373 kubelet[2926]: E0124 00:59:07.850493 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:59:08.827414 kubelet[2926]: E0124 00:59:08.827215 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:59:09.384724 systemd[1]: Started sshd@27-10.0.0.105:22-10.0.0.1:55710.service - OpenSSH per-connection server daemon (10.0.0.1:55710). 
Jan 24 00:59:09.402419 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:09.402635 kernel: audit: type=1130 audit(1769216349.384:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:09.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:09.579000 audit[6459]: USER_ACCT pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.580993 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 55710 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:09.586624 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:09.583000 audit[6459]: CRED_ACQ pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.603781 systemd-logind[1597]: New session 29 of user core. 
Jan 24 00:59:09.625110 kernel: audit: type=1101 audit(1769216349.579:907): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.625428 kernel: audit: type=1103 audit(1769216349.583:908): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.625493 kernel: audit: type=1006 audit(1769216349.583:909): pid=6459 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 24 00:59:09.583000 audit[6459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6026a70 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:09.684738 kernel: audit: type=1300 audit(1769216349.583:909): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6026a70 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:09.684900 kernel: audit: type=1327 audit(1769216349.583:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:09.583000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:09.702061 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 24 00:59:09.722000 audit[6459]: USER_START pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.762041 kernel: audit: type=1105 audit(1769216349.722:910): pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.751000 audit[6463]: CRED_ACQ pid=6463 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.784389 kernel: audit: type=1103 audit(1769216349.751:911): pid=6463 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:09.819179 kubelet[2926]: E0124 00:59:09.818190 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:59:10.132050 sshd[6463]: Connection 
closed by 10.0.0.1 port 55710 Jan 24 00:59:10.140860 sshd-session[6459]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:10.145000 audit[6459]: USER_END pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.145000 audit[6459]: CRED_DISP pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.178663 systemd[1]: sshd@27-10.0.0.105:22-10.0.0.1:55710.service: Deactivated successfully. Jan 24 00:59:10.178725 systemd-logind[1597]: Session 29 logged out. Waiting for processes to exit. Jan 24 00:59:10.200143 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 00:59:10.208106 kernel: audit: type=1106 audit(1769216350.145:912): pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.211403 kernel: audit: type=1104 audit(1769216350.145:913): pid=6459 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:10.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:10.214883 systemd-logind[1597]: Removed session 29. Jan 24 00:59:10.812910 kubelet[2926]: E0124 00:59:10.812688 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:59:11.812839 kubelet[2926]: E0124 00:59:11.809170 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:59:11.812839 kubelet[2926]: E0124 00:59:11.809402 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:59:12.812132 kubelet[2926]: E0124 00:59:12.806730 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:13.822392 kubelet[2926]: E0124 00:59:13.819478 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:15.172226 systemd[1]: Started sshd@28-10.0.0.105:22-10.0.0.1:40814.service - OpenSSH per-connection server daemon (10.0.0.1:40814). Jan 24 00:59:15.182468 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:15.182731 kernel: audit: type=1130 audit(1769216355.172:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:40814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:15.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:40814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:15.372823 kernel: audit: type=1101 audit(1769216355.339:916): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.339000 audit[6501]: USER_ACCT pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.369016 sshd-session[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:15.373937 sshd[6501]: Accepted publickey for core from 10.0.0.1 port 40814 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:15.353000 audit[6501]: CRED_ACQ pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.420365 systemd-logind[1597]: New session 30 of user core. 
Jan 24 00:59:15.427419 kernel: audit: type=1103 audit(1769216355.353:917): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.427523 kernel: audit: type=1006 audit(1769216355.359:918): pid=6501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 24 00:59:15.359000 audit[6501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd9129c0 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:15.471367 kernel: audit: type=1300 audit(1769216355.359:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd9129c0 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:15.471628 kernel: audit: type=1327 audit(1769216355.359:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:15.359000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:15.481222 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 24 00:59:15.492000 audit[6501]: USER_START pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.522364 kernel: audit: type=1105 audit(1769216355.492:919): pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.507000 audit[6505]: CRED_ACQ pid=6505 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.541387 kernel: audit: type=1103 audit(1769216355.507:920): pid=6505 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.771919 sshd[6505]: Connection closed by 10.0.0.1 port 40814 Jan 24 00:59:15.776141 sshd-session[6501]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:15.784000 audit[6501]: USER_END pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.795958 systemd-logind[1597]: Session 30 logged out. Waiting for processes to exit. 
Jan 24 00:59:15.797148 systemd[1]: sshd@28-10.0.0.105:22-10.0.0.1:40814.service: Deactivated successfully. Jan 24 00:59:15.820450 kernel: audit: type=1106 audit(1769216355.784:921): pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.785000 audit[6501]: CRED_DISP pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.836962 systemd[1]: session-30.scope: Deactivated successfully. Jan 24 00:59:15.844839 systemd-logind[1597]: Removed session 30. Jan 24 00:59:15.851846 kernel: audit: type=1104 audit(1769216355.785:922): pid=6501 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:15.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:40814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:19.822187 kubelet[2926]: E0124 00:59:19.822067 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:59:20.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:40830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:20.826665 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:20.826718 kernel: audit: type=1130 audit(1769216360.809:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:40830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:20.810872 systemd[1]: Started sshd@29-10.0.0.105:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). 
Jan 24 00:59:21.131000 audit[6521]: USER_ACCT pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.144989 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:21.173464 kernel: audit: type=1101 audit(1769216361.131:925): pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.173528 sshd[6521]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:21.135000 audit[6521]: CRED_ACQ pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.215110 systemd-logind[1597]: New session 31 of user core. 
Jan 24 00:59:21.244432 kernel: audit: type=1103 audit(1769216361.135:926): pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.244561 kernel: audit: type=1006 audit(1769216361.135:927): pid=6521 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 24 00:59:21.298073 kernel: audit: type=1300 audit(1769216361.135:927): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc19bed10 a2=3 a3=0 items=0 ppid=1 pid=6521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:21.135000 audit[6521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc19bed10 a2=3 a3=0 items=0 ppid=1 pid=6521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:21.256159 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 24 00:59:21.135000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:21.328848 kernel: audit: type=1327 audit(1769216361.135:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:21.289000 audit[6521]: USER_START pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.445382 kernel: audit: type=1105 audit(1769216361.289:928): pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.298000 audit[6525]: CRED_ACQ pid=6525 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.509443 kernel: audit: type=1103 audit(1769216361.298:929): pid=6525 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.811902 containerd[1614]: time="2026-01-24T00:59:21.805756274Z" level=info msg="container event discarded" container=ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47 type=CONTAINER_CREATED_EVENT Jan 24 00:59:21.811902 containerd[1614]: time="2026-01-24T00:59:21.809157178Z" level=info msg="container event discarded" container=ed8f568d195612b6c9ae2028a8db1ae0f99d6660ddf70d152bdbf74105ab3b47 
type=CONTAINER_STARTED_EVENT Jan 24 00:59:21.873440 sshd[6525]: Connection closed by 10.0.0.1 port 40830 Jan 24 00:59:21.874161 sshd-session[6521]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:21.887000 audit[6521]: USER_END pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.901176 systemd[1]: sshd@29-10.0.0.105:22-10.0.0.1:40830.service: Deactivated successfully. Jan 24 00:59:21.902482 systemd-logind[1597]: Session 31 logged out. Waiting for processes to exit. Jan 24 00:59:21.906902 systemd[1]: session-31.scope: Deactivated successfully. Jan 24 00:59:21.924851 systemd-logind[1597]: Removed session 31. Jan 24 00:59:21.962443 kernel: audit: type=1106 audit(1769216361.887:930): pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.889000 audit[6521]: CRED_DISP pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.989961 kernel: audit: type=1104 audit(1769216361.889:931): pid=6521 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:21.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:40830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:22.007987 containerd[1614]: time="2026-01-24T00:59:22.006928675Z" level=info msg="container event discarded" container=732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85 type=CONTAINER_CREATED_EVENT Jan 24 00:59:22.831082 kubelet[2926]: E0124 00:59:22.829435 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 00:59:22.831082 kubelet[2926]: E0124 00:59:22.830000 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 00:59:22.856003 kubelet[2926]: E0124 00:59:22.848845 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:59:22.857358 containerd[1614]: time="2026-01-24T00:59:22.857124956Z" level=info msg="container event discarded" container=732058f3a690a793cdb623c778a400185a7cebeb8d44769c7885055b111e3b85 type=CONTAINER_STARTED_EVENT Jan 24 00:59:23.823884 kubelet[2926]: E0124 00:59:23.819241 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 00:59:24.847834 kubelet[2926]: E0124 00:59:24.846485 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 00:59:25.538945 containerd[1614]: time="2026-01-24T00:59:25.537691601Z" level=info msg="container event discarded" container=953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134 type=CONTAINER_CREATED_EVENT Jan 24 00:59:25.538945 containerd[1614]: time="2026-01-24T00:59:25.537827584Z" level=info msg="container event discarded" container=953e76c1dd42388eee03697b13734cd61fc13cb0485abe106131dab209c5d134 type=CONTAINER_STARTED_EVENT Jan 24 00:59:26.809792 kubelet[2926]: E0124 00:59:26.809597 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:26.918921 systemd[1]: Started sshd@30-10.0.0.105:22-10.0.0.1:42860.service - OpenSSH per-connection server daemon (10.0.0.1:42860). Jan 24 00:59:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:42860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:26.927483 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:26.927595 kernel: audit: type=1130 audit(1769216366.918:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:42860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:27.159000 audit[6541]: USER_ACCT pid=6541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:27.163563 sshd[6541]: Accepted publickey for core from 10.0.0.1 port 42860 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:27.180038 sshd-session[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:27.193536 kernel: audit: type=1101 audit(1769216367.159:934): pid=6541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:27.170000 audit[6541]: CRED_ACQ pid=6541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:27.219394 systemd-logind[1597]: New session 32 of user core. 
Jan 24 00:59:27.269421 kernel: audit: type=1103 audit(1769216367.170:935): pid=6541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:27.173000 audit[6541]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebb9316f0 a2=3 a3=0 items=0 ppid=1 pid=6541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:27.324832 kernel: audit: type=1006 audit(1769216367.173:936): pid=6541 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 24 00:59:27.324973 kernel: audit: type=1300 audit(1769216367.173:936): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebb9316f0 a2=3 a3=0 items=0 ppid=1 pid=6541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:27.325005 kernel: audit: type=1327 audit(1769216367.173:936): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:27.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:27.326852 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 24 00:59:27.355000 audit[6541]: USER_START pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.363000 audit[6545]: CRED_ACQ pid=6545 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.461488 kernel: audit: type=1105 audit(1769216367.355:937): pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.461749 kernel: audit: type=1103 audit(1769216367.363:938): pid=6545 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.893566 sshd[6545]: Connection closed by 10.0.0.1 port 42860
Jan 24 00:59:27.898070 sshd-session[6541]: pam_unix(sshd:session): session closed for user core
Jan 24 00:59:27.908000 audit[6541]: USER_END pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.951072 systemd[1]: sshd@30-10.0.0.105:22-10.0.0.1:42860.service: Deactivated successfully.
Jan 24 00:59:27.974538 kernel: audit: type=1106 audit(1769216367.908:939): pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.971564 systemd[1]: session-32.scope: Deactivated successfully.
Jan 24 00:59:27.914000 audit[6541]: CRED_DISP pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:27.985115 systemd-logind[1597]: Session 32 logged out. Waiting for processes to exit.
Jan 24 00:59:27.987073 systemd-logind[1597]: Removed session 32.
Jan 24 00:59:27.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:42860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:28.015478 kernel: audit: type=1104 audit(1769216367.914:940): pid=6541 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:30.822875 kubelet[2926]: E0124 00:59:30.822492 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e"
Jan 24 00:59:32.998167 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:59:32.998497 kernel: audit: type=1130 audit(1769216372.959:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:32.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:32.959970 systemd[1]: Started sshd@31-10.0.0.105:22-10.0.0.1:45636.service - OpenSSH per-connection server daemon (10.0.0.1:45636).
Jan 24 00:59:33.238000 audit[6562]: USER_ACCT pid=6562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.239859 sshd[6562]: Accepted publickey for core from 10.0.0.1 port 45636 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:59:33.266145 sshd-session[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:59:33.296486 kernel: audit: type=1101 audit(1769216373.238:943): pid=6562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.298185 kernel: audit: type=1103 audit(1769216373.254:944): pid=6562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.254000 audit[6562]: CRED_ACQ pid=6562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.361432 systemd-logind[1597]: New session 33 of user core.
Jan 24 00:59:33.368170 kernel: audit: type=1006 audit(1769216373.254:945): pid=6562 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1
Jan 24 00:59:33.368377 kernel: audit: type=1300 audit(1769216373.254:945): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5f3afd10 a2=3 a3=0 items=0 ppid=1 pid=6562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:33.254000 audit[6562]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5f3afd10 a2=3 a3=0 items=0 ppid=1 pid=6562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:33.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:33.428760 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 24 00:59:33.454488 kernel: audit: type=1327 audit(1769216373.254:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:33.456000 audit[6562]: USER_START pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.495579 kernel: audit: type=1105 audit(1769216373.456:946): pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.495808 kernel: audit: type=1103 audit(1769216373.464:947): pid=6566 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.464000 audit[6566]: CRED_ACQ pid=6566 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.810070 kubelet[2926]: E0124 00:59:33.809938 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684"
Jan 24 00:59:33.900215 sshd[6566]: Connection closed by 10.0.0.1 port 45636
Jan 24 00:59:33.902571 sshd-session[6562]: pam_unix(sshd:session): session closed for user core
Jan 24 00:59:33.912000 audit[6562]: USER_END pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.927910 systemd-logind[1597]: Session 33 logged out. Waiting for processes to exit.
Jan 24 00:59:33.930442 systemd[1]: sshd@31-10.0.0.105:22-10.0.0.1:45636.service: Deactivated successfully.
Jan 24 00:59:33.954709 systemd[1]: session-33.scope: Deactivated successfully.
Jan 24 00:59:33.965921 kernel: audit: type=1106 audit(1769216373.912:948): pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.966058 kernel: audit: type=1104 audit(1769216373.913:949): pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.913000 audit[6562]: CRED_DISP pid=6562 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:33.979001 systemd-logind[1597]: Removed session 33.
Jan 24 00:59:33.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:35.705552 containerd[1614]: time="2026-01-24T00:59:35.705204897Z" level=info msg="container event discarded" container=5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad type=CONTAINER_CREATED_EVENT
Jan 24 00:59:35.834485 kubelet[2926]: E0124 00:59:35.833485 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2"
Jan 24 00:59:35.859524 kubelet[2926]: E0124 00:59:35.859032 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a"
Jan 24 00:59:36.407051 containerd[1614]: time="2026-01-24T00:59:36.406873754Z" level=info msg="container event discarded" container=5272d590239c7e45c8ebd1d6bb5ec3323c43e1caa48e32fa7f2c35ed574800ad type=CONTAINER_STARTED_EVENT
Jan 24 00:59:36.818446 kubelet[2926]: E0124 00:59:36.817937 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3"
Jan 24 00:59:37.811868 kubelet[2926]: E0124 00:59:37.811816 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:59:37.827871 kubelet[2926]: E0124 00:59:37.827460 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d"
Jan 24 00:59:38.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.105:22-10.0.0.1:45642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:38.973989 systemd[1]: Started sshd@32-10.0.0.105:22-10.0.0.1:45642.service - OpenSSH per-connection server daemon (10.0.0.1:45642).
Jan 24 00:59:39.012386 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:59:39.012570 kernel: audit: type=1130 audit(1769216378.973:951): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.105:22-10.0.0.1:45642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:39.245829 sshd[6581]: Accepted publickey for core from 10.0.0.1 port 45642 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:59:39.244000 audit[6581]: USER_ACCT pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.250753 sshd-session[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:59:39.244000 audit[6581]: CRED_ACQ pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.299150 systemd-logind[1597]: New session 34 of user core.
Jan 24 00:59:39.313766 kernel: audit: type=1101 audit(1769216379.244:952): pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.313876 kernel: audit: type=1103 audit(1769216379.244:953): pid=6581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.357665 kernel: audit: type=1006 audit(1769216379.248:954): pid=6581 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1
Jan 24 00:59:39.357805 kernel: audit: type=1300 audit(1769216379.248:954): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd89553a0 a2=3 a3=0 items=0 ppid=1 pid=6581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:39.248000 audit[6581]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd89553a0 a2=3 a3=0 items=0 ppid=1 pid=6581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:39.357873 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 24 00:59:39.248000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:39.426505 kernel: audit: type=1327 audit(1769216379.248:954): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:39.426737 kernel: audit: type=1105 audit(1769216379.391:955): pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.391000 audit[6581]: USER_START pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.481495 kernel: audit: type=1103 audit(1769216379.411:956): pid=6585 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:39.411000 audit[6585]: CRED_ACQ pid=6585 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:40.200878 sshd[6585]: Connection closed by 10.0.0.1 port 45642
Jan 24 00:59:40.209024 sshd-session[6581]: pam_unix(sshd:session): session closed for user core
Jan 24 00:59:40.215000 audit[6581]: USER_END pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:40.225864 systemd-logind[1597]: Session 34 logged out. Waiting for processes to exit.
Jan 24 00:59:40.227065 systemd[1]: sshd@32-10.0.0.105:22-10.0.0.1:45642.service: Deactivated successfully.
Jan 24 00:59:40.253535 systemd[1]: session-34.scope: Deactivated successfully.
Jan 24 00:59:40.273533 kernel: audit: type=1106 audit(1769216380.215:957): pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:40.216000 audit[6581]: CRED_DISP pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:40.302016 kernel: audit: type=1104 audit(1769216380.216:958): pid=6581 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:40.275653 systemd-logind[1597]: Removed session 34.
Jan 24 00:59:40.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.105:22-10.0.0.1:45642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:40.813380 kubelet[2926]: E0124 00:59:40.813179 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:59:44.807382 kubelet[2926]: E0124 00:59:44.807039 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:59:44.810422 kubelet[2926]: E0124 00:59:44.810074 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e"
Jan 24 00:59:44.812751 kubelet[2926]: E0124 00:59:44.812705 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684"
Jan 24 00:59:45.296075 systemd[1]: Started sshd@33-10.0.0.105:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348).
Jan 24 00:59:45.312869 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:59:45.312987 kernel: audit: type=1130 audit(1769216385.298:960): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.105:22-10.0.0.1:52348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:45.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.105:22-10.0.0.1:52348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:45.664000 audit[6629]: USER_ACCT pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:45.673725 sshd[6629]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo
Jan 24 00:59:45.682839 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:59:45.710042 systemd-logind[1597]: New session 35 of user core.
Jan 24 00:59:45.674000 audit[6629]: CRED_ACQ pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:45.811379 kubelet[2926]: E0124 00:59:45.807206 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:59:45.816410 kernel: audit: type=1101 audit(1769216385.664:961): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:45.817039 kernel: audit: type=1103 audit(1769216385.674:962): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:45.817125 kernel: audit: type=1006 audit(1769216385.674:963): pid=6629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1
Jan 24 00:59:45.840188 kernel: audit: type=1300 audit(1769216385.674:963): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff12e320 a2=3 a3=0 items=0 ppid=1 pid=6629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:45.674000 audit[6629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff12e320 a2=3 a3=0 items=0 ppid=1 pid=6629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 24 00:59:45.885160 kernel: audit: type=1327 audit(1769216385.674:963): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:45.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 24 00:59:45.914426 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 24 00:59:45.945000 audit[6629]: USER_START pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:46.004576 kernel: audit: type=1105 audit(1769216385.945:964): pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:45.984000 audit[6635]: CRED_ACQ pid=6635 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:46.070218 kernel: audit: type=1103 audit(1769216385.984:965): pid=6635 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:46.823453 kubelet[2926]: E0124 00:59:46.820018 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2"
Jan 24 00:59:47.057588 sshd[6635]: Connection closed by 10.0.0.1 port 52348
Jan 24 00:59:47.060703 sshd-session[6629]: pam_unix(sshd:session): session closed for user core
Jan 24 00:59:47.065000 audit[6629]: USER_END pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:47.140573 kernel: audit: type=1106 audit(1769216387.065:966): pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:47.088000 audit[6629]: CRED_DISP pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:47.152457 systemd[1]: sshd@33-10.0.0.105:22-10.0.0.1:52348.service: Deactivated successfully.
Jan 24 00:59:47.164101 systemd[1]: session-35.scope: Deactivated successfully.
Jan 24 00:59:47.174401 systemd-logind[1597]: Session 35 logged out. Waiting for processes to exit.
Jan 24 00:59:47.183705 systemd-logind[1597]: Removed session 35.
Jan 24 00:59:47.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.105:22-10.0.0.1:52348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:47.217680 kernel: audit: type=1104 audit(1769216387.088:967): pid=6629 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 24 00:59:47.826971 kubelet[2926]: E0124 00:59:47.826892 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a"
Jan 24 00:59:47.831238 kubelet[2926]: E0124 00:59:47.828634 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3"
Jan 24 00:59:49.817941 kubelet[2926]: E0124 00:59:49.817879 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d"
Jan 24 00:59:52.126411 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 24 00:59:52.126672 kernel: audit: type=1130 audit(1769216392.099:969): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.105:22-10.0.0.1:52358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:52.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.105:22-10.0.0.1:52358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 24 00:59:52.100393 systemd[1]: Started sshd@34-10.0.0.105:22-10.0.0.1:52358.service - OpenSSH per-connection server daemon (10.0.0.1:52358).
Jan 24 00:59:52.338000 audit[6652]: USER_ACCT pid=6652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.344724 sshd[6652]: Accepted publickey for core from 10.0.0.1 port 52358 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:52.354190 sshd-session[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:52.379093 systemd-logind[1597]: New session 36 of user core. Jan 24 00:59:52.382793 kernel: audit: type=1101 audit(1769216392.338:970): pid=6652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.347000 audit[6652]: CRED_ACQ pid=6652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.387051 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 24 00:59:52.421685 kernel: audit: type=1103 audit(1769216392.347:971): pid=6652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.422396 kernel: audit: type=1006 audit(1769216392.347:972): pid=6652 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 24 00:59:52.347000 audit[6652]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc24c71990 a2=3 a3=0 items=0 ppid=1 pid=6652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:52.557652 kernel: audit: type=1300 audit(1769216392.347:972): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc24c71990 a2=3 a3=0 items=0 ppid=1 pid=6652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:52.557821 kernel: audit: type=1327 audit(1769216392.347:972): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:52.347000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:52.396000 audit[6652]: USER_START pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.406000 audit[6656]: CRED_ACQ pid=6656 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.629014 kernel: audit: type=1105 audit(1769216392.396:973): pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.629118 kernel: audit: type=1103 audit(1769216392.406:974): pid=6656 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.785516 sshd[6656]: Connection closed by 10.0.0.1 port 52358 Jan 24 00:59:52.786061 sshd-session[6652]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:52.791000 audit[6652]: USER_END pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.801791 systemd-logind[1597]: Session 36 logged out. Waiting for processes to exit. Jan 24 00:59:52.803604 systemd[1]: sshd@34-10.0.0.105:22-10.0.0.1:52358.service: Deactivated successfully. Jan 24 00:59:52.808630 kubelet[2926]: E0124 00:59:52.808520 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:52.810941 systemd[1]: session-36.scope: Deactivated successfully. Jan 24 00:59:52.822091 systemd-logind[1597]: Removed session 36. 
Jan 24 00:59:52.827513 kernel: audit: type=1106 audit(1769216392.791:975): pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.792000 audit[6652]: CRED_DISP pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.866714 kernel: audit: type=1104 audit(1769216392.792:976): pid=6652 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:52.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.105:22-10.0.0.1:52358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:55.815969 containerd[1614]: time="2026-01-24T00:59:55.815924911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:59:55.935052 containerd[1614]: time="2026-01-24T00:59:55.934719239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:55.944860 containerd[1614]: time="2026-01-24T00:59:55.944598479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:59:55.944860 containerd[1614]: time="2026-01-24T00:59:55.944725234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:55.945553 kubelet[2926]: E0124 00:59:55.945491 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:59:55.948571 kubelet[2926]: E0124 00:59:55.948526 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:59:55.949604 kubelet[2926]: E0124 00:59:55.949541 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:59:55.971700 containerd[1614]: time="2026-01-24T00:59:55.963501748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:59:56.050075 containerd[1614]: time="2026-01-24T00:59:56.049881398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:56.057745 containerd[1614]: time="2026-01-24T00:59:56.057098667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:59:56.057745 containerd[1614]: time="2026-01-24T00:59:56.057477829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:56.058808 kubelet[2926]: E0124 00:59:56.058504 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:59:56.058808 kubelet[2926]: E0124 00:59:56.058652 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:59:56.058808 kubelet[2926]: E0124 00:59:56.058822 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5zq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-htrd2_calico-system(1351988d-2da1-448e-bfda-fb7490691684): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:56.061006 kubelet[2926]: E0124 00:59:56.060941 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 00:59:57.811923 systemd[1]: Started sshd@35-10.0.0.105:22-10.0.0.1:42604.service - OpenSSH per-connection server daemon (10.0.0.1:42604). Jan 24 00:59:57.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:42604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:57.834804 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:59:57.834907 kernel: audit: type=1130 audit(1769216397.810:978): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:42604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:59:58.010000 audit[6673]: USER_ACCT pid=6673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.022101 sshd[6673]: Accepted publickey for core from 10.0.0.1 port 42604 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 00:59:58.025786 sshd-session[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:58.014000 audit[6673]: CRED_ACQ pid=6673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.063979 systemd-logind[1597]: New session 37 of user core. Jan 24 00:59:58.076627 kernel: audit: type=1101 audit(1769216398.010:979): pid=6673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.076712 kernel: audit: type=1103 audit(1769216398.014:980): pid=6673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.094028 kernel: audit: type=1006 audit(1769216398.014:981): pid=6673 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 24 00:59:58.098186 kernel: audit: type=1300 audit(1769216398.014:981): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1f07b790 a2=3 a3=0 items=0 ppid=1 pid=6673 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:58.014000 audit[6673]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1f07b790 a2=3 a3=0 items=0 ppid=1 pid=6673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:59:58.014000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:58.132022 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 24 00:59:58.145075 kernel: audit: type=1327 audit(1769216398.014:981): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:59:58.156000 audit[6673]: USER_START pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.164000 audit[6677]: CRED_ACQ pid=6677 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.213587 kernel: audit: type=1105 audit(1769216398.156:982): pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.214691 kernel: audit: type=1103 audit(1769216398.164:983): pid=6677 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.490316 sshd[6677]: Connection closed by 10.0.0.1 port 42604 Jan 24 00:59:58.493702 sshd-session[6673]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:58.498000 audit[6673]: USER_END pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.514773 systemd-logind[1597]: Session 37 logged out. Waiting for processes to exit. Jan 24 00:59:58.516144 systemd[1]: sshd@35-10.0.0.105:22-10.0.0.1:42604.service: Deactivated successfully. Jan 24 00:59:58.544615 kernel: audit: type=1106 audit(1769216398.498:984): pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.499000 audit[6673]: CRED_DISP pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.573487 kernel: audit: type=1104 audit(1769216398.499:985): pid=6673 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 00:59:58.549856 systemd[1]: session-37.scope: Deactivated successfully. Jan 24 00:59:58.567846 systemd-logind[1597]: Removed session 37. 
Jan 24 00:59:58.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:42604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:59:58.817534 containerd[1614]: time="2026-01-24T00:59:58.812607927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:59:58.890743 containerd[1614]: time="2026-01-24T00:59:58.890471677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:59:58.905694 containerd[1614]: time="2026-01-24T00:59:58.902585746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:59:58.905694 containerd[1614]: time="2026-01-24T00:59:58.902708063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:58.905923 kubelet[2926]: E0124 00:59:58.903495 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:58.905923 kubelet[2926]: E0124 00:59:58.903560 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:59:58.905923 kubelet[2926]: E0124 00:59:58.903823 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b956dc89b-8vwmm_calico-apiserver(bf242984-56a4-4914-9f0b-44fbe621897e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:58.905923 kubelet[2926]: E0124 00:59:58.905070 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 00:59:58.914748 containerd[1614]: time="2026-01-24T00:59:58.912694888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:59:59.012476 containerd[1614]: time="2026-01-24T00:59:59.011973910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 
00:59:59.024482 containerd[1614]: time="2026-01-24T00:59:59.023812323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:59:59.024482 containerd[1614]: time="2026-01-24T00:59:59.023935000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:59:59.026955 kubelet[2926]: E0124 00:59:59.026659 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:59:59.026955 kubelet[2926]: E0124 00:59:59.026729 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:59:59.027078 kubelet[2926]: E0124 00:59:59.026905 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5tr2c_calico-system(c89836fa-dd95-4cb6-925a-be9fc6a96ed3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:59:59.030927 kubelet[2926]: E0124 00:59:59.030887 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:01.814660 containerd[1614]: time="2026-01-24T01:00:01.813707950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 01:00:03.291755 containerd[1614]: time="2026-01-24T01:00:03.291658256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:03.304050 containerd[1614]: 
time="2026-01-24T01:00:03.303919896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:03.304537 containerd[1614]: time="2026-01-24T01:00:03.304489663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 01:00:03.304968 kubelet[2926]: E0124 01:00:03.304920 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 01:00:03.309066 kubelet[2926]: E0124 01:00:03.308746 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 01:00:03.312745 kubelet[2926]: E0124 01:00:03.309707 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5cmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797c85599-jzs5d_calico-system(1f065d32-b76b-4b45-b859-d08ade23f4a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:03.313034 containerd[1614]: time="2026-01-24T01:00:03.309754076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 01:00:03.318820 kubelet[2926]: E0124 01:00:03.317926 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:00:03.576478 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:03.576694 kernel: audit: type=1130 audit(1769216403.519:987): pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:35014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:03.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:35014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:03.520656 systemd[1]: Started sshd@36-10.0.0.105:22-10.0.0.1:35014.service - OpenSSH per-connection server daemon (10.0.0.1:35014). Jan 24 01:00:03.871000 audit[6699]: USER_ACCT pid=6699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:03.874167 sshd[6699]: Accepted publickey for core from 10.0.0.1 port 35014 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:03.883204 sshd-session[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:03.878000 audit[6699]: CRED_ACQ pid=6699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:03.926493 systemd-logind[1597]: New session 38 of user core. 
Jan 24 01:00:03.955565 kernel: audit: type=1101 audit(1769216403.871:988): pid=6699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:03.955712 kernel: audit: type=1103 audit(1769216403.878:989): pid=6699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:03.955764 kernel: audit: type=1006 audit(1769216403.878:990): pid=6699 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 24 01:00:03.973626 kernel: audit: type=1300 audit(1769216403.878:990): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec2733960 a2=3 a3=0 items=0 ppid=1 pid=6699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:03.878000 audit[6699]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec2733960 a2=3 a3=0 items=0 ppid=1 pid=6699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:03.982897 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 24 01:00:04.006681 kernel: audit: type=1327 audit(1769216403.878:990): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:03.878000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:04.001000 audit[6699]: USER_START pid=6699 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.062548 kernel: audit: type=1105 audit(1769216404.001:991): pid=6699 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.011000 audit[6703]: CRED_ACQ pid=6703 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.112801 kernel: audit: type=1103 audit(1769216404.011:992): pid=6703 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.459527 sshd[6703]: Connection closed by 10.0.0.1 port 35014 Jan 24 01:00:04.467496 sshd-session[6699]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:04.472000 audit[6699]: USER_END pid=6699 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.482942 systemd[1]: sshd@36-10.0.0.105:22-10.0.0.1:35014.service: Deactivated successfully. Jan 24 01:00:04.496182 systemd[1]: session-38.scope: Deactivated successfully. Jan 24 01:00:04.517238 systemd[1]: Started sshd@37-10.0.0.105:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026). Jan 24 01:00:04.521076 systemd-logind[1597]: Session 38 logged out. Waiting for processes to exit. Jan 24 01:00:04.525040 systemd-logind[1597]: Removed session 38. Jan 24 01:00:04.473000 audit[6699]: CRED_DISP pid=6699 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.576949 kubelet[2926]: E0124 01:00:04.568790 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 01:00:04.576949 kubelet[2926]: E0124 01:00:04.569738 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 01:00:04.576949 kubelet[2926]: E0124 01:00:04.571846 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6b956dc89b-tb4xl_calico-apiserver(12fcbf43-6c31-4160-9172-b8eee7f25a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:04.576949 kubelet[2926]: E0124 01:00:04.574085 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:00:04.579036 containerd[1614]: time="2026-01-24T01:00:04.556595387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:04.579036 containerd[1614]: time="2026-01-24T01:00:04.563783618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 01:00:04.579036 containerd[1614]: time="2026-01-24T01:00:04.563893663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:04.604682 kernel: audit: type=1106 audit(1769216404.472:993): pid=6699 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.604830 kernel: audit: type=1104 audit(1769216404.473:994): pid=6699 uid=0 
auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:35014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:04.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.105:22-10.0.0.1:35026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:04.784000 audit[6717]: USER_ACCT pid=6717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.789065 sshd[6717]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:04.789000 audit[6717]: CRED_ACQ pid=6717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.789000 audit[6717]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4ff3c290 a2=3 a3=0 items=0 ppid=1 pid=6717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:04.789000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:04.793917 sshd-session[6717]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 24 01:00:04.818981 containerd[1614]: time="2026-01-24T01:00:04.818856642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 01:00:04.820564 systemd-logind[1597]: New session 39 of user core. Jan 24 01:00:04.832835 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 24 01:00:04.861000 audit[6717]: USER_START pid=6717 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:04.872000 audit[6721]: CRED_ACQ pid=6721 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:05.648994 containerd[1614]: time="2026-01-24T01:00:05.648936633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:05.668104 containerd[1614]: time="2026-01-24T01:00:05.667916747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 01:00:05.668104 containerd[1614]: time="2026-01-24T01:00:05.668050055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:05.670490 kubelet[2926]: E0124 01:00:05.668979 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" 
Jan 24 01:00:05.670490 kubelet[2926]: E0124 01:00:05.669040 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 01:00:05.670490 kubelet[2926]: E0124 01:00:05.669168 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf0f3ad691e64c2d81d0e0aa71a74bd8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:05.684900 containerd[1614]: time="2026-01-24T01:00:05.683870689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 01:00:06.280768 sshd[6721]: Connection closed by 10.0.0.1 port 35026 Jan 24 01:00:06.282171 sshd-session[6717]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:06.293000 audit[6717]: USER_END pid=6717 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:06.293000 audit[6717]: CRED_DISP pid=6717 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:06.309786 systemd[1]: Started sshd@38-10.0.0.105:22-10.0.0.1:35042.service - OpenSSH per-connection server daemon (10.0.0.1:35042). Jan 24 01:00:06.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:35042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:06.318713 systemd[1]: sshd@37-10.0.0.105:22-10.0.0.1:35026.service: Deactivated successfully. 
Jan 24 01:00:06.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.105:22-10.0.0.1:35026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:06.325911 systemd[1]: session-39.scope: Deactivated successfully. Jan 24 01:00:06.335747 systemd-logind[1597]: Session 39 logged out. Waiting for processes to exit. Jan 24 01:00:06.340185 systemd-logind[1597]: Removed session 39. Jan 24 01:00:06.519000 audit[6731]: USER_ACCT pid=6731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:06.529091 sshd[6731]: Accepted publickey for core from 10.0.0.1 port 35042 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:06.534000 audit[6731]: CRED_ACQ pid=6731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:06.537000 audit[6731]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc9190ab0 a2=3 a3=0 items=0 ppid=1 pid=6731 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:06.537000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:06.557667 sshd-session[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:06.587228 containerd[1614]: time="2026-01-24T01:00:06.586967270Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 01:00:06.599559 systemd-logind[1597]: New session 40 of 
user core. Jan 24 01:00:06.617743 containerd[1614]: time="2026-01-24T01:00:06.617667526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 01:00:06.626004 kubelet[2926]: E0124 01:00:06.621604 2926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 01:00:06.626004 kubelet[2926]: E0124 01:00:06.621680 2926 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 01:00:06.626004 kubelet[2926]: E0124 01:00:06.621936 2926 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h9rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf8bbd55-mfbxj_calico-system(e539ab3d-80aa-4b97-836f-149823e6c41d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 01:00:06.626785 kubelet[2926]: E0124 01:00:06.626672 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:00:06.627049 containerd[1614]: time="2026-01-24T01:00:06.621230044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 01:00:06.657625 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 24 01:00:06.672000 audit[6731]: USER_START pid=6731 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:06.684000 audit[6738]: CRED_ACQ pid=6738 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:09.000389 kubelet[2926]: E0124 01:00:08.998945 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:00:10.890592 kubelet[2926]: E0124 01:00:10.865235 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:00:11.323725 sshd[6738]: Connection closed by 10.0.0.1 port 35042 Jan 24 01:00:11.327180 sshd-session[6731]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:11.416597 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 24 01:00:11.429704 kernel: audit: type=1106 audit(1769216411.329:1011): pid=6731 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.329000 audit[6731]: USER_END pid=6731 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.413000 audit[6731]: CRED_DISP pid=6731 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.489080 systemd[1]: sshd@38-10.0.0.105:22-10.0.0.1:35042.service: Deactivated successfully. Jan 24 01:00:11.508075 systemd[1]: session-40.scope: Deactivated successfully. Jan 24 01:00:11.514064 systemd[1]: session-40.scope: Consumed 1.129s CPU time, 46.6M memory peak. 
Jan 24 01:00:11.555659 kernel: audit: type=1104 audit(1769216411.413:1012): pid=6731 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:11.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:35042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:11.559632 systemd-logind[1597]: Session 40 logged out. Waiting for processes to exit. Jan 24 01:00:11.578122 systemd[1]: Started sshd@39-10.0.0.105:22-10.0.0.1:35050.service - OpenSSH per-connection server daemon (10.0.0.1:35050). Jan 24 01:00:11.604989 systemd-logind[1597]: Removed session 40. Jan 24 01:00:11.614833 kernel: audit: type=1131 audit(1769216411.491:1013): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:35042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:11.702160 kernel: audit: type=1130 audit(1769216411.577:1014): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:35050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:11.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:35050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:11.828703 kubelet[2926]: E0124 01:00:11.826882 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:12.079000 audit[6774]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=6774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.166585 kernel: audit: type=1325 audit(1769216412.079:1015): table=filter:146 family=2 entries=26 op=nft_register_rule pid=6774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.079000 audit[6774]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc30acddf0 a2=0 a3=7ffc30acdddc items=0 ppid=3030 pid=6774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:00:12.358506 kernel: audit: type=1300 audit(1769216412.079:1015): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc30acddf0 a2=0 a3=7ffc30acdddc items=0 ppid=3030 pid=6774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.358630 kernel: audit: type=1327 audit(1769216412.079:1015): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:00:12.364214 sshd[6773]: Accepted publickey for core from 10.0.0.1 port 35050 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:12.380995 sshd-session[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:12.360000 audit[6773]: USER_ACCT pid=6773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.428801 systemd-logind[1597]: New session 41 of user core. Jan 24 01:00:12.493704 kernel: audit: type=1101 audit(1769216412.360:1016): pid=6773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.494204 kernel: audit: type=1325 audit(1769216412.361:1017): table=nat:147 family=2 entries=20 op=nft_register_rule pid=6774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.361000 audit[6774]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=6774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.513800 kernel: audit: type=1103 audit(1769216412.374:1018): pid=6773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.374000 audit[6773]: CRED_ACQ pid=6773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.374000 audit[6773]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb6b5a8e0 a2=3 a3=0 items=0 ppid=1 pid=6773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.374000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:12.361000 audit[6774]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc30acddf0 a2=0 a3=0 items=0 ppid=3030 pid=6774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:00:12.570916 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 24 01:00:12.599000 audit[6773]: USER_START pid=6773 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.612000 audit[6779]: CRED_ACQ pid=6779 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:12.710000 audit[6781]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=6781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.710000 audit[6781]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd97311790 a2=0 a3=7ffd9731177c items=0 ppid=3030 pid=6781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:00:12.760000 audit[6781]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=6781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:00:12.760000 audit[6781]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd97311790 a2=0 a3=0 items=0 ppid=3030 pid=6781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:12.760000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:00:15.620578 sshd[6779]: Connection closed by 10.0.0.1 port 35050 Jan 24 01:00:15.629759 sshd-session[6773]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:15.634000 audit[6773]: USER_END pid=6773 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:15.634000 audit[6773]: CRED_DISP pid=6773 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:15.731596 systemd[1]: sshd@39-10.0.0.105:22-10.0.0.1:35050.service: Deactivated successfully. Jan 24 01:00:15.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:35050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:15.763837 systemd[1]: session-41.scope: Deactivated successfully. Jan 24 01:00:15.779628 systemd-logind[1597]: Session 41 logged out. Waiting for processes to exit. Jan 24 01:00:15.802943 systemd-logind[1597]: Removed session 41. Jan 24 01:00:15.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:36824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:15.861242 systemd[1]: Started sshd@40-10.0.0.105:22-10.0.0.1:36824.service - OpenSSH per-connection server daemon (10.0.0.1:36824). 
Jan 24 01:00:16.354238 containerd[1614]: time="2026-01-24T01:00:16.352118459Z" level=info msg="container event discarded" container=cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4 type=CONTAINER_CREATED_EVENT Jan 24 01:00:16.363916 containerd[1614]: time="2026-01-24T01:00:16.357922361Z" level=info msg="container event discarded" container=cb83f062cd4a17bd23c0139150bdb62dbf13743ce3aa19fbd616b8874dd1ecf4 type=CONTAINER_STARTED_EVENT Jan 24 01:00:16.427981 containerd[1614]: time="2026-01-24T01:00:16.427813711Z" level=info msg="container event discarded" container=a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4 type=CONTAINER_CREATED_EVENT Jan 24 01:00:16.427981 containerd[1614]: time="2026-01-24T01:00:16.427875405Z" level=info msg="container event discarded" container=a161c015787394811a14acc5434e6bca997b7b8c6e29d208e2f4961c4adaa1c4 type=CONTAINER_STARTED_EVENT Jan 24 01:00:16.688545 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 24 01:00:16.688671 kernel: audit: type=1101 audit(1769216416.656:1028): pid=6814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:16.656000 audit[6814]: USER_ACCT pid=6814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:16.683670 sshd-session[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:16.689499 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 36824 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:16.676000 audit[6814]: CRED_ACQ pid=6814 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:16.729464 systemd-logind[1597]: New session 42 of user core. Jan 24 01:00:16.757884 kernel: audit: type=1103 audit(1769216416.676:1029): pid=6814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:16.757992 kernel: audit: type=1006 audit(1769216416.676:1030): pid=6814 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 24 01:00:16.676000 audit[6814]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff251992f0 a2=3 a3=0 items=0 ppid=1 pid=6814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:16.831915 kubelet[2926]: E0124 01:00:16.826223 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:00:16.880234 kernel: audit: type=1300 audit(1769216416.676:1030): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff251992f0 a2=3 a3=0 items=0 ppid=1 pid=6814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:16.884561 kernel: audit: type=1327 audit(1769216416.676:1030): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:16.676000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:16.925995 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 24 01:00:16.958000 audit[6814]: USER_START pid=6814 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.075598 kernel: audit: type=1105 audit(1769216416.958:1031): pid=6814 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:16.978000 audit[6822]: CRED_ACQ pid=6822 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.126619 kernel: audit: type=1103 audit(1769216416.978:1032): pid=6822 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.622558 sshd[6822]: Connection closed by 10.0.0.1 port 36824 Jan 24 01:00:17.625600 sshd-session[6814]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:17.704495 kernel: audit: type=1106 audit(1769216417.631:1033): pid=6814 
uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.631000 audit[6814]: USER_END pid=6814 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.702000 audit[6814]: CRED_DISP pid=6814 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.722116 systemd[1]: sshd@40-10.0.0.105:22-10.0.0.1:36824.service: Deactivated successfully. Jan 24 01:00:17.726548 systemd-logind[1597]: Session 42 logged out. Waiting for processes to exit. Jan 24 01:00:17.738201 systemd[1]: session-42.scope: Deactivated successfully. Jan 24 01:00:17.743743 systemd-logind[1597]: Removed session 42. Jan 24 01:00:17.756979 kernel: audit: type=1104 audit(1769216417.702:1034): pid=6814 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:17.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:36824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:17.814084 kernel: audit: type=1131 audit(1769216417.719:1035): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:36824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:17.823916 kubelet[2926]: E0124 01:00:17.822591 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:00:18.820640 kubelet[2926]: E0124 01:00:18.812225 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:00:21.819169 kubelet[2926]: E0124 01:00:21.818515 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:00:22.694632 systemd[1]: Started sshd@41-10.0.0.105:22-10.0.0.1:49104.service - OpenSSH per-connection server daemon (10.0.0.1:49104). Jan 24 01:00:22.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:49104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:22.757648 kernel: audit: type=1130 audit(1769216422.691:1036): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:49104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:22.824891 kubelet[2926]: E0124 01:00:22.818623 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:23.360507 kernel: audit: type=1101 audit(1769216423.254:1037): pid=6838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.254000 audit[6838]: USER_ACCT pid=6838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.261081 sshd-session[6838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:23.361616 sshd[6838]: Accepted publickey for core from 10.0.0.1 port 49104 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:23.257000 audit[6838]: CRED_ACQ pid=6838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.423803 systemd-logind[1597]: New session 43 of user core. 
Jan 24 01:00:23.513758 kernel: audit: type=1103 audit(1769216423.257:1038): pid=6838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.513911 kernel: audit: type=1006 audit(1769216423.257:1039): pid=6838 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 24 01:00:23.257000 audit[6838]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca23fbd50 a2=3 a3=0 items=0 ppid=1 pid=6838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:23.534510 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 24 01:00:23.616681 kernel: audit: type=1300 audit(1769216423.257:1039): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca23fbd50 a2=3 a3=0 items=0 ppid=1 pid=6838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:23.616855 kernel: audit: type=1327 audit(1769216423.257:1039): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:23.257000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:23.573000 audit[6838]: USER_START pid=6838 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.744169 kernel: audit: type=1105 audit(1769216423.573:1040): pid=6838 uid=0 auid=500 ses=43 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.578000 audit[6843]: CRED_ACQ pid=6843 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:23.829562 kernel: audit: type=1103 audit(1769216423.578:1041): pid=6843 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.294610 sshd[6843]: Connection closed by 10.0.0.1 port 49104 Jan 24 01:00:24.302942 sshd-session[6838]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:24.332000 audit[6838]: USER_END pid=6838 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.453647 kernel: audit: type=1106 audit(1769216424.332:1042): pid=6838 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.363604 systemd[1]: sshd@41-10.0.0.105:22-10.0.0.1:49104.service: Deactivated successfully. Jan 24 01:00:24.390722 systemd[1]: session-43.scope: Deactivated successfully. 
Jan 24 01:00:24.332000 audit[6838]: CRED_DISP pid=6838 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.455106 systemd-logind[1597]: Session 43 logged out. Waiting for processes to exit. Jan 24 01:00:24.472788 systemd-logind[1597]: Removed session 43. Jan 24 01:00:24.547604 kernel: audit: type=1104 audit(1769216424.332:1043): pid=6838 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:24.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:49104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:24.825674 kubelet[2926]: E0124 01:00:24.821862 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:00:25.324149 containerd[1614]: time="2026-01-24T01:00:25.323986178Z" level=info msg="container event discarded" container=234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656 type=CONTAINER_CREATED_EVENT Jan 24 01:00:26.223556 containerd[1614]: time="2026-01-24T01:00:26.190172145Z" level=info msg="container event discarded" container=234b588a417914b4a53a1056546792902761966eecdeb0c837fe9d39a1d50656 
type=CONTAINER_STARTED_EVENT Jan 24 01:00:27.320557 containerd[1614]: time="2026-01-24T01:00:27.320109306Z" level=info msg="container event discarded" container=96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef type=CONTAINER_CREATED_EVENT Jan 24 01:00:28.269776 containerd[1614]: time="2026-01-24T01:00:28.269676549Z" level=info msg="container event discarded" container=96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef type=CONTAINER_STARTED_EVENT Jan 24 01:00:28.825556 kubelet[2926]: E0124 01:00:28.819897 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:00:28.867632 containerd[1614]: time="2026-01-24T01:00:28.864542044Z" level=info msg="container event discarded" container=96fa3a3e8280d6204355350951e9ada029119ab063193ed9ed713f353d30dcef type=CONTAINER_STOPPED_EVENT Jan 24 01:00:29.423804 systemd[1]: Started sshd@42-10.0.0.105:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114). Jan 24 01:00:29.493163 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:29.493538 kernel: audit: type=1130 audit(1769216429.419:1045): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.105:22-10.0.0.1:49114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:29.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.105:22-10.0.0.1:49114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:29.873651 sshd[6860]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:29.869000 audit[6860]: USER_ACCT pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:29.906113 sshd-session[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:29.961363 kernel: audit: type=1101 audit(1769216429.869:1046): pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:29.886000 audit[6860]: CRED_ACQ pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:29.974840 systemd-logind[1597]: New session 44 of user core. 
Jan 24 01:00:30.021046 kernel: audit: type=1103 audit(1769216429.886:1047): pid=6860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:29.886000 audit[6860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3acb2040 a2=3 a3=0 items=0 ppid=1 pid=6860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:30.114564 kernel: audit: type=1006 audit(1769216429.886:1048): pid=6860 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 24 01:00:30.114723 kernel: audit: type=1300 audit(1769216429.886:1048): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3acb2040 a2=3 a3=0 items=0 ppid=1 pid=6860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:30.114773 kernel: audit: type=1327 audit(1769216429.886:1048): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:29.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:30.123947 systemd[1]: Started session-44.scope - Session 44 of User core. 
Jan 24 01:00:30.179000 audit[6860]: USER_START pid=6860 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.210000 audit[6864]: CRED_ACQ pid=6864 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.516482 kernel: audit: type=1105 audit(1769216430.179:1049): pid=6860 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:30.516602 kernel: audit: type=1103 audit(1769216430.210:1050): pid=6864 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.237613 sshd[6864]: Connection closed by 10.0.0.1 port 49114 Jan 24 01:00:31.241896 sshd-session[6860]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:31.321635 kernel: audit: type=1106 audit(1769216431.265:1051): pid=6860 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.265000 audit[6860]: USER_END pid=6860 uid=0 auid=500 ses=44 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.302784 systemd[1]: sshd@42-10.0.0.105:22-10.0.0.1:49114.service: Deactivated successfully. Jan 24 01:00:31.265000 audit[6860]: CRED_DISP pid=6860 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.335861 systemd[1]: session-44.scope: Deactivated successfully. Jan 24 01:00:31.382544 systemd-logind[1597]: Session 44 logged out. Waiting for processes to exit. Jan 24 01:00:31.384881 systemd-logind[1597]: Removed session 44. Jan 24 01:00:31.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.105:22-10.0.0.1:49114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:31.403121 kernel: audit: type=1104 audit(1769216431.265:1052): pid=6860 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:31.815705 kubelet[2926]: E0124 01:00:31.815475 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:00:31.822745 kubelet[2926]: E0124 01:00:31.822610 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:32.965553 kubelet[2926]: E0124 01:00:32.961554 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:00:35.873877 kubelet[2926]: E0124 01:00:35.859064 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:36.418437 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:36.418791 kernel: audit: type=1130 audit(1769216436.367:1054): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.105:22-10.0.0.1:39446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:36.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.105:22-10.0.0.1:39446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:36.367601 systemd[1]: Started sshd@43-10.0.0.105:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446). 
Jan 24 01:00:36.816000 audit[6878]: USER_ACCT pid=6878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:36.824915 kubelet[2926]: E0124 01:00:36.824777 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:00:36.907801 sshd[6878]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:36.919735 kernel: audit: type=1101 audit(1769216436.816:1055): pid=6878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:36.919852 kernel: audit: type=1103 audit(1769216436.916:1056): pid=6878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:36.916000 audit[6878]: CRED_ACQ pid=6878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:36.923767 sshd-session[6878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:36.962478 systemd-logind[1597]: New session 45 of user core. Jan 24 01:00:37.031604 kernel: audit: type=1006 audit(1769216436.917:1057): pid=6878 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 24 01:00:37.031743 kernel: audit: type=1300 audit(1769216436.917:1057): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfd7bc000 a2=3 a3=0 items=0 ppid=1 pid=6878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:36.917000 audit[6878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfd7bc000 a2=3 a3=0 items=0 ppid=1 pid=6878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:36.917000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:37.157793 kernel: audit: type=1327 audit(1769216436.917:1057): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:37.151758 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 24 01:00:37.181000 audit[6878]: USER_START pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.195000 audit[6882]: CRED_ACQ pid=6882 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.336546 kernel: audit: type=1105 audit(1769216437.181:1058): pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.336694 kernel: audit: type=1103 audit(1769216437.195:1059): pid=6882 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.819565 sshd[6882]: Connection closed by 10.0.0.1 port 39446 Jan 24 01:00:37.823654 sshd-session[6878]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:37.850000 audit[6878]: USER_END pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.901018 systemd[1]: sshd@43-10.0.0.105:22-10.0.0.1:39446.service: Deactivated successfully. 
Jan 24 01:00:37.922041 systemd[1]: session-45.scope: Deactivated successfully. Jan 24 01:00:37.935909 systemd-logind[1597]: Session 45 logged out. Waiting for processes to exit. Jan 24 01:00:37.944614 systemd-logind[1597]: Removed session 45. Jan 24 01:00:38.013570 kernel: audit: type=1106 audit(1769216437.850:1060): pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.850000 audit[6878]: CRED_DISP pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:38.119420 kernel: audit: type=1104 audit(1769216437.850:1061): pid=6878 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:37.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.105:22-10.0.0.1:39446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:38.815433 kubelet[2926]: E0124 01:00:38.811950 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:39.828077 kubelet[2926]: E0124 01:00:39.822722 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:00:40.808407 kubelet[2926]: E0124 01:00:40.807988 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:42.996009 systemd[1]: Started sshd@44-10.0.0.105:22-10.0.0.1:45636.service - OpenSSH per-connection server daemon (10.0.0.1:45636). Jan 24 01:00:43.031027 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:43.042050 kernel: audit: type=1130 audit(1769216442.992:1063): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:43.678000 audit[6895]: USER_ACCT pid=6895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.708790 sshd[6895]: Accepted publickey for core from 10.0.0.1 port 45636 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:43.715656 sshd-session[6895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:43.761572 kernel: audit: type=1101 audit(1769216443.678:1064): pid=6895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.695000 audit[6895]: CRED_ACQ pid=6895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.798616 systemd-logind[1597]: New session 46 of user core. 
Jan 24 01:00:43.842541 kernel: audit: type=1103 audit(1769216443.695:1065): pid=6895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.853429 kubelet[2926]: E0124 01:00:43.850614 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:00:43.891405 kernel: audit: type=1006 audit(1769216443.695:1066): pid=6895 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 24 01:00:43.891553 kernel: audit: type=1300 audit(1769216443.695:1066): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdab377a60 a2=3 a3=0 items=0 ppid=1 pid=6895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:43.695000 audit[6895]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdab377a60 a2=3 a3=0 items=0 ppid=1 pid=6895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:43.888625 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 24 01:00:43.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:43.961914 kernel: audit: type=1327 audit(1769216443.695:1066): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:43.946000 audit[6895]: USER_START pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:43.954000 audit[6899]: CRED_ACQ pid=6899 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.130593 kernel: audit: type=1105 audit(1769216443.946:1067): pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.130755 kernel: audit: type=1103 audit(1769216443.954:1068): pid=6899 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.466801 sshd[6899]: Connection closed by 10.0.0.1 port 45636 Jan 24 01:00:44.467904 sshd-session[6895]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:44.477000 audit[6895]: USER_END pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.493989 systemd[1]: sshd@44-10.0.0.105:22-10.0.0.1:45636.service: Deactivated successfully. Jan 24 01:00:44.516699 kernel: audit: type=1106 audit(1769216444.477:1069): pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.513238 systemd[1]: session-46.scope: Deactivated successfully. Jan 24 01:00:44.477000 audit[6895]: CRED_DISP pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.559686 kernel: audit: type=1104 audit(1769216444.477:1070): pid=6895 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:44.535677 systemd-logind[1597]: Session 46 logged out. Waiting for processes to exit. Jan 24 01:00:44.541846 systemd-logind[1597]: Removed session 46. Jan 24 01:00:44.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.105:22-10.0.0.1:45636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:44.936624 containerd[1614]: time="2026-01-24T01:00:44.935739853Z" level=info msg="container event discarded" container=8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24 type=CONTAINER_CREATED_EVENT Jan 24 01:00:46.240870 containerd[1614]: time="2026-01-24T01:00:46.240783331Z" level=info msg="container event discarded" container=8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24 type=CONTAINER_STARTED_EVENT Jan 24 01:00:46.823056 kubelet[2926]: E0124 01:00:46.822921 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:00:46.824083 kubelet[2926]: E0124 01:00:46.823743 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:48.060998 kubelet[2926]: E0124 01:00:48.060732 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:00:49.636224 systemd[1]: Started sshd@45-10.0.0.105:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638). Jan 24 01:00:49.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.105:22-10.0.0.1:45638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:49.677572 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:49.677676 kernel: audit: type=1130 audit(1769216449.636:1072): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.105:22-10.0.0.1:45638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:50.312000 audit[6937]: USER_ACCT pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.333710 sshd[6937]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:50.341849 sshd-session[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:50.423664 kernel: audit: type=1101 audit(1769216450.312:1073): pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.423799 kernel: audit: type=1103 audit(1769216450.332:1074): pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.332000 audit[6937]: CRED_ACQ pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.496527 kernel: audit: type=1006 audit(1769216450.335:1075): pid=6937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 24 01:00:50.506859 kernel: audit: type=1300 audit(1769216450.335:1075): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0bcb6610 a2=3 a3=0 items=0 ppid=1 pid=6937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:50.335000 audit[6937]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0bcb6610 a2=3 a3=0 items=0 ppid=1 pid=6937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:50.525525 systemd-logind[1597]: New session 47 of user core. Jan 24 01:00:50.335000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:50.596964 kernel: audit: type=1327 audit(1769216450.335:1075): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:50.617692 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 24 01:00:50.720000 audit[6937]: USER_START pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.814438 kernel: audit: type=1105 audit(1769216450.720:1076): pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.756000 audit[6941]: CRED_ACQ pid=6941 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:50.916614 kernel: audit: type=1103 audit(1769216450.756:1077): pid=6941 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:51.310394 containerd[1614]: time="2026-01-24T01:00:51.307977652Z" level=info msg="container event discarded" container=8fea6c93b4bfbbb131ff895bbf7be05a48c8391bf40f3df4640aa88c50751b24 type=CONTAINER_STOPPED_EVENT Jan 24 01:00:51.871217 kubelet[2926]: E0124 01:00:51.862496 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:51.886395 kubelet[2926]: E0124 01:00:51.869915 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:00:52.114432 sshd[6941]: Connection closed by 10.0.0.1 port 45638 Jan 24 01:00:52.118732 sshd-session[6937]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:52.150000 audit[6937]: USER_END pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.214026 systemd[1]: sshd@45-10.0.0.105:22-10.0.0.1:45638.service: Deactivated successfully. Jan 24 01:00:52.227588 systemd[1]: session-47.scope: Deactivated successfully. Jan 24 01:00:52.263219 systemd-logind[1597]: Session 47 logged out. Waiting for processes to exit. Jan 24 01:00:52.270900 systemd-logind[1597]: Removed session 47. Jan 24 01:00:52.293549 kernel: audit: type=1106 audit(1769216452.150:1078): pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.293698 kernel: audit: type=1104 audit(1769216452.150:1079): pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.150000 audit[6937]: CRED_DISP pid=6937 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:52.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.105:22-10.0.0.1:45638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:52.812475 kubelet[2926]: E0124 01:00:52.810494 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:53.864834 kubelet[2926]: E0124 01:00:53.864785 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:00:56.812912 kubelet[2926]: E0124 01:00:56.811059 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:56.816754 kubelet[2926]: E0124 01:00:56.816667 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:00:57.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.105:22-10.0.0.1:48098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:00:57.178771 systemd[1]: Started sshd@46-10.0.0.105:22-10.0.0.1:48098.service - OpenSSH per-connection server daemon (10.0.0.1:48098). Jan 24 01:00:57.251912 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:00:57.252056 kernel: audit: type=1130 audit(1769216457.178:1081): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.105:22-10.0.0.1:48098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:00:57.698000 audit[6956]: USER_ACCT pid=6956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.750857 systemd-logind[1597]: New session 48 of user core. Jan 24 01:00:57.708642 sshd-session[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:57.762744 sshd[6956]: Accepted publickey for core from 10.0.0.1 port 48098 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:00:57.792408 kernel: audit: type=1101 audit(1769216457.698:1082): pid=6956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.703000 audit[6956]: CRED_ACQ pid=6956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.794828 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 24 01:00:57.876436 kernel: audit: type=1103 audit(1769216457.703:1083): pid=6956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.945001 kernel: audit: type=1006 audit(1769216457.703:1084): pid=6956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 24 01:00:57.960228 kubelet[2926]: E0124 01:00:57.960169 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:00:57.703000 audit[6956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b072450 a2=3 a3=0 items=0 ppid=1 pid=6956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:58.090546 kernel: audit: type=1300 audit(1769216457.703:1084): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b072450 a2=3 a3=0 items=0 ppid=1 pid=6956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:00:57.703000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:00:58.111986 kernel: audit: type=1327 audit(1769216457.703:1084): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 
24 01:00:58.116463 kernel: audit: type=1105 audit(1769216457.824:1085): pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.824000 audit[6956]: USER_START pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:57.849000 audit[6960]: CRED_ACQ pid=6960 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.255497 kernel: audit: type=1103 audit(1769216457.849:1086): pid=6960 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.776736 sshd[6960]: Connection closed by 10.0.0.1 port 48098 Jan 24 01:00:58.800174 sshd-session[6956]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:58.802000 audit[6956]: USER_END pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.826604 systemd[1]: sshd@46-10.0.0.105:22-10.0.0.1:48098.service: Deactivated successfully. 
Jan 24 01:00:58.834936 systemd[1]: session-48.scope: Deactivated successfully. Jan 24 01:00:58.842576 systemd-logind[1597]: Session 48 logged out. Waiting for processes to exit. Jan 24 01:00:58.855975 systemd-logind[1597]: Removed session 48. Jan 24 01:00:58.804000 audit[6956]: CRED_DISP pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.943401 kernel: audit: type=1106 audit(1769216458.802:1087): pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.943550 kernel: audit: type=1104 audit(1769216458.804:1088): pid=6956 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:00:58.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.105:22-10.0.0.1:48098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:01.872185 kubelet[2926]: E0124 01:01:01.870813 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:01.885020 kubelet[2926]: E0124 01:01:01.884967 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:01:01.900979 kubelet[2926]: E0124 01:01:01.900903 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:01:03.849696 systemd[1]: Started sshd@47-10.0.0.105:22-10.0.0.1:45236.service - OpenSSH per-connection server daemon (10.0.0.1:45236). 
Jan 24 01:01:03.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.105:22-10.0.0.1:45236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:03.875579 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:03.875807 kernel: audit: type=1130 audit(1769216463.851:1090): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.105:22-10.0.0.1:45236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:04.243000 audit[6973]: USER_ACCT pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.251657 sshd[6973]: Accepted publickey for core from 10.0.0.1 port 45236 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:01:04.254191 sshd-session[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:04.286906 systemd-logind[1597]: New session 49 of user core. 
Jan 24 01:01:04.334800 kernel: audit: type=1101 audit(1769216464.243:1091): pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.334958 kernel: audit: type=1103 audit(1769216464.248:1092): pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.248000 audit[6973]: CRED_ACQ pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.377931 kernel: audit: type=1006 audit(1769216464.248:1093): pid=6973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 24 01:01:04.382512 kernel: audit: type=1300 audit(1769216464.248:1093): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62140cd0 a2=3 a3=0 items=0 ppid=1 pid=6973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:04.248000 audit[6973]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62140cd0 a2=3 a3=0 items=0 ppid=1 pid=6973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:04.373547 systemd[1]: Started session-49.scope - Session 49 of User core. 
Jan 24 01:01:04.248000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:04.420916 kernel: audit: type=1327 audit(1769216464.248:1093): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:04.437000 audit[6973]: USER_START pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.485992 kernel: audit: type=1105 audit(1769216464.437:1094): pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.469000 audit[6977]: CRED_ACQ pid=6977 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.519507 kernel: audit: type=1103 audit(1769216464.469:1095): pid=6977 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.757999 sshd[6977]: Connection closed by 10.0.0.1 port 45236 Jan 24 01:01:04.759568 sshd-session[6973]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:04.761000 audit[6973]: USER_END pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.770450 systemd[1]: sshd@47-10.0.0.105:22-10.0.0.1:45236.service: Deactivated successfully. Jan 24 01:01:04.788463 systemd[1]: session-49.scope: Deactivated successfully. Jan 24 01:01:04.796543 systemd-logind[1597]: Session 49 logged out. Waiting for processes to exit. Jan 24 01:01:04.802841 systemd-logind[1597]: Removed session 49. Jan 24 01:01:04.810670 kernel: audit: type=1106 audit(1769216464.761:1096): pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.761000 audit[6973]: CRED_DISP pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.847525 kernel: audit: type=1104 audit(1769216464.761:1097): pid=6973 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:04.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.105:22-10.0.0.1:45236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:05.814929 kubelet[2926]: E0124 01:01:05.813532 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:01:07.822899 kubelet[2926]: E0124 01:01:07.822655 2926 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:01:07.826365 kubelet[2926]: E0124 01:01:07.826141 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:01:08.810138 kubelet[2926]: E0124 01:01:08.809998 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:01:08.910000 audit[6991]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=6991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:08.919426 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:08.919612 kernel: audit: type=1325 audit(1769216468.910:1099): table=filter:150 family=2 entries=26 op=nft_register_rule pid=6991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:08.910000 audit[6991]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbc4f7f80 a2=0 a3=7ffdbc4f7f6c items=0 ppid=3030 pid=6991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:08.957592 kernel: audit: type=1300 audit(1769216468.910:1099): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbc4f7f80 a2=0 a3=7ffdbc4f7f6c items=0 ppid=3030 pid=6991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:08.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:08.965000 audit[6991]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=6991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:08.982609 kernel: audit: type=1327 
audit(1769216468.910:1099): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:08.982700 kernel: audit: type=1325 audit(1769216468.965:1100): table=nat:151 family=2 entries=104 op=nft_register_chain pid=6991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 01:01:08.982748 kernel: audit: type=1300 audit(1769216468.965:1100): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdbc4f7f80 a2=0 a3=7ffdbc4f7f6c items=0 ppid=3030 pid=6991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:08.965000 audit[6991]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdbc4f7f80 a2=0 a3=7ffdbc4f7f6c items=0 ppid=3030 pid=6991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:08.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:09.014472 kernel: audit: type=1327 audit(1769216468.965:1100): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 01:01:09.791483 systemd[1]: Started sshd@48-10.0.0.105:22-10.0.0.1:45248.service - OpenSSH per-connection server daemon (10.0.0.1:45248). Jan 24 01:01:09.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.105:22-10.0.0.1:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:09.824145 kernel: audit: type=1130 audit(1769216469.791:1101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.105:22-10.0.0.1:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:09.926752 sshd[6993]: Accepted publickey for core from 10.0.0.1 port 45248 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:01:09.926000 audit[6993]: USER_ACCT pid=6993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:09.946164 kernel: audit: type=1101 audit(1769216469.926:1102): pid=6993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:09.946382 kernel: audit: type=1103 audit(1769216469.945:1103): pid=6993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:09.945000 audit[6993]: CRED_ACQ pid=6993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:09.947705 sshd-session[6993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:09.959476 systemd-logind[1597]: New session 50 of user core. 
Jan 24 01:01:09.963924 kernel: audit: type=1006 audit(1769216469.945:1104): pid=6993 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 24 01:01:09.945000 audit[6993]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c3f3c10 a2=3 a3=0 items=0 ppid=1 pid=6993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:09.945000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:09.976428 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 24 01:01:09.993000 audit[6993]: USER_START pid=6993 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:09.997000 audit[6997]: CRED_ACQ pid=6997 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.163124 sshd[6997]: Connection closed by 10.0.0.1 port 45248 Jan 24 01:01:10.163819 sshd-session[6993]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:10.168000 audit[6993]: USER_END pid=6993 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.169000 audit[6993]: CRED_DISP pid=6993 uid=0 auid=500 ses=50 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:10.173891 systemd[1]: sshd@48-10.0.0.105:22-10.0.0.1:45248.service: Deactivated successfully. Jan 24 01:01:10.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.105:22-10.0.0.1:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:10.179703 systemd[1]: session-50.scope: Deactivated successfully. Jan 24 01:01:10.183426 systemd-logind[1597]: Session 50 logged out. Waiting for processes to exit. Jan 24 01:01:10.187744 systemd-logind[1597]: Removed session 50. Jan 24 01:01:10.809162 kubelet[2926]: E0124 01:01:10.809100 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:01:13.814231 kubelet[2926]: E0124 01:01:13.814124 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:01:14.809619 kubelet[2926]: E0124 01:01:14.809237 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a" Jan 24 01:01:15.195023 systemd[1]: Started sshd@49-10.0.0.105:22-10.0.0.1:49706.service - OpenSSH per-connection server daemon (10.0.0.1:49706). Jan 24 01:01:15.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.105:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:15.203844 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 01:01:15.203958 kernel: audit: type=1130 audit(1769216475.196:1110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.105:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:15.493000 audit[7036]: USER_ACCT pid=7036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.494956 sshd[7036]: Accepted publickey for core from 10.0.0.1 port 49706 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:01:15.500179 sshd-session[7036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:15.532229 kernel: audit: type=1101 audit(1769216475.493:1111): pid=7036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.532505 kernel: audit: type=1103 audit(1769216475.497:1112): pid=7036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.497000 audit[7036]: CRED_ACQ pid=7036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.521762 systemd-logind[1597]: New session 51 of user core. 
Jan 24 01:01:15.497000 audit[7036]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcba1d41b0 a2=3 a3=0 items=0 ppid=1 pid=7036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:15.562364 kernel: audit: type=1006 audit(1769216475.497:1113): pid=7036 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 24 01:01:15.562536 kernel: audit: type=1300 audit(1769216475.497:1113): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcba1d41b0 a2=3 a3=0 items=0 ppid=1 pid=7036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:15.497000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:15.571427 kernel: audit: type=1327 audit(1769216475.497:1113): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:15.572603 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 24 01:01:15.579000 audit[7036]: USER_START pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.634573 kernel: audit: type=1105 audit(1769216475.579:1114): pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.634983 kernel: audit: type=1103 audit(1769216475.591:1115): pid=7040 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.591000 audit[7040]: CRED_ACQ pid=7040 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.917787 sshd[7040]: Connection closed by 10.0.0.1 port 49706 Jan 24 01:01:15.922416 sshd-session[7036]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:15.925000 audit[7036]: USER_END pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.935593 systemd[1]: sshd@49-10.0.0.105:22-10.0.0.1:49706.service: Deactivated successfully. 
Jan 24 01:01:15.936814 systemd-logind[1597]: Session 51 logged out. Waiting for processes to exit. Jan 24 01:01:15.942717 systemd[1]: session-51.scope: Deactivated successfully. Jan 24 01:01:15.953022 systemd-logind[1597]: Removed session 51. Jan 24 01:01:15.925000 audit[7036]: CRED_DISP pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.981527 kernel: audit: type=1106 audit(1769216475.925:1116): pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.981667 kernel: audit: type=1104 audit(1769216475.925:1117): pid=7036 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:15.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.105:22-10.0.0.1:49706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:20.816572 kubelet[2926]: E0124 01:01:20.813889 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797c85599-jzs5d" podUID="1f065d32-b76b-4b45-b859-d08ade23f4a2" Jan 24 01:01:20.838551 kubelet[2926]: E0124 01:01:20.838483 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-htrd2" podUID="1351988d-2da1-448e-bfda-fb7490691684" Jan 24 01:01:20.990726 systemd[1]: Started sshd@50-10.0.0.105:22-10.0.0.1:49720.service - OpenSSH per-connection server daemon (10.0.0.1:49720). 
Jan 24 01:01:21.069201 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:21.069485 kernel: audit: type=1130 audit(1769216480.990:1119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.105:22-10.0.0.1:49720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:20.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.105:22-10.0.0.1:49720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:21.260000 audit[7056]: USER_ACCT pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.285724 sshd[7056]: Accepted publickey for core from 10.0.0.1 port 49720 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:01:21.286814 sshd-session[7056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:21.271000 audit[7056]: CRED_ACQ pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.326455 systemd-logind[1597]: New session 52 of user core. 
Jan 24 01:01:21.333718 kernel: audit: type=1101 audit(1769216481.260:1120): pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.333807 kernel: audit: type=1103 audit(1769216481.271:1121): pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.353485 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 24 01:01:21.271000 audit[7056]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3aa18e20 a2=3 a3=0 items=0 ppid=1 pid=7056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:21.411230 kernel: audit: type=1006 audit(1769216481.271:1122): pid=7056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 24 01:01:21.411521 kernel: audit: type=1300 audit(1769216481.271:1122): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3aa18e20 a2=3 a3=0 items=0 ppid=1 pid=7056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:21.411585 kernel: audit: type=1327 audit(1769216481.271:1122): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:21.271000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:21.424465 kernel: audit: type=1105 audit(1769216481.369:1123): pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.369000 audit[7056]: USER_START pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.385000 audit[7060]: CRED_ACQ pid=7060 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.530506 kernel: audit: type=1103 audit(1769216481.385:1124): pid=7060 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.819794 kubelet[2926]: E0124 01:01:21.819712 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-8vwmm" podUID="bf242984-56a4-4914-9f0b-44fbe621897e" Jan 24 01:01:21.819794 kubelet[2926]: E0124 01:01:21.819726 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5tr2c" podUID="c89836fa-dd95-4cb6-925a-be9fc6a96ed3" Jan 24 01:01:21.983612 sshd[7060]: Connection closed by 10.0.0.1 port 49720 Jan 24 01:01:21.985647 sshd-session[7056]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:21.992000 audit[7056]: USER_END pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.999537 systemd-logind[1597]: Session 52 logged out. Waiting for processes to exit. Jan 24 01:01:22.003807 systemd[1]: sshd@50-10.0.0.105:22-10.0.0.1:49720.service: Deactivated successfully. Jan 24 01:01:22.011812 systemd[1]: session-52.scope: Deactivated successfully. Jan 24 01:01:22.021727 systemd-logind[1597]: Removed session 52. 
Jan 24 01:01:22.023497 kernel: audit: type=1106 audit(1769216481.992:1125): pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.023588 kernel: audit: type=1104 audit(1769216481.993:1126): pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:21.993000 audit[7056]: CRED_DISP pid=7056 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:22.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.105:22-10.0.0.1:49720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:27.027451 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 01:01:27.029435 kernel: audit: type=1130 audit(1769216487.020:1128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.105:22-10.0.0.1:54880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:27.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.105:22-10.0.0.1:54880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 01:01:27.020966 systemd[1]: Started sshd@51-10.0.0.105:22-10.0.0.1:54880.service - OpenSSH per-connection server daemon (10.0.0.1:54880). 
Jan 24 01:01:27.197788 sshd[7076]: Accepted publickey for core from 10.0.0.1 port 54880 ssh2: RSA SHA256:3vbvf+o2T3Klr2xTjn5OF6caMiJSB4v/VBYBcaVmWRo Jan 24 01:01:27.196000 audit[7076]: USER_ACCT pid=7076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.212377 sshd-session[7076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:01:27.238399 kernel: audit: type=1101 audit(1769216487.196:1129): pid=7076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.238542 kernel: audit: type=1103 audit(1769216487.208:1130): pid=7076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.208000 audit[7076]: CRED_ACQ pid=7076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.253612 systemd-logind[1597]: New session 53 of user core. 
Jan 24 01:01:27.296391 kernel: audit: type=1006 audit(1769216487.208:1131): pid=7076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 24 01:01:27.208000 audit[7076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff31529090 a2=3 a3=0 items=0 ppid=1 pid=7076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:27.361856 kernel: audit: type=1300 audit(1769216487.208:1131): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff31529090 a2=3 a3=0 items=0 ppid=1 pid=7076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 01:01:27.362812 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 24 01:01:27.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:27.375773 kernel: audit: type=1327 audit(1769216487.208:1131): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 01:01:27.377000 audit[7076]: USER_START pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.381000 audit[7080]: CRED_ACQ pid=7080 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.426874 kernel: audit: type=1105 audit(1769216487.377:1132): pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.427013 kernel: audit: type=1103 audit(1769216487.381:1133): pid=7080 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.642683 sshd[7080]: Connection closed by 10.0.0.1 port 54880 Jan 24 01:01:27.647000 sshd-session[7076]: pam_unix(sshd:session): session closed for user core Jan 24 01:01:27.661000 audit[7076]: USER_END pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.673752 systemd[1]: sshd@51-10.0.0.105:22-10.0.0.1:54880.service: Deactivated successfully. Jan 24 01:01:27.675389 systemd-logind[1597]: Session 53 logged out. Waiting for processes to exit. Jan 24 01:01:27.685680 systemd[1]: session-53.scope: Deactivated successfully. Jan 24 01:01:27.696165 systemd-logind[1597]: Removed session 53. 
Jan 24 01:01:27.663000 audit[7076]: CRED_DISP pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.749880 kernel: audit: type=1106 audit(1769216487.661:1134): pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.750123 kernel: audit: type=1104 audit(1769216487.663:1135): pid=7076 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 24 01:01:27.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.105:22-10.0.0.1:54880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 01:01:27.817412 kubelet[2926]: E0124 01:01:27.817350 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf8bbd55-mfbxj" podUID="e539ab3d-80aa-4b97-836f-149823e6c41d" Jan 24 01:01:28.817867 kubelet[2926]: E0124 01:01:28.809521 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b956dc89b-tb4xl" podUID="12fcbf43-6c31-4160-9172-b8eee7f25a4a"