Jan 14 01:05:32.118899 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 01:05:32.118927 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:05:32.118940 kernel: BIOS-provided physical RAM map:
Jan 14 01:05:32.118947 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 01:05:32.118953 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 01:05:32.118960 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 01:05:32.118967 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 01:05:32.118973 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 01:05:32.119091 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 01:05:32.119105 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 01:05:32.119119 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:05:32.119130 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 01:05:32.119141 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:05:32.119153 kernel: NX (Execute Disable) protection: active
Jan 14 01:05:32.119161 kernel: APIC: Static calls initialized
Jan 14 01:05:32.119172 kernel: SMBIOS 2.8 present.
Jan 14 01:05:32.119270 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 01:05:32.119278 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:05:32.119285 kernel: Hypervisor detected: KVM
Jan 14 01:05:32.119292 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 01:05:32.119299 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:05:32.119306 kernel: kvm-clock: using sched offset of 15048070345 cycles
Jan 14 01:05:32.119314 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:05:32.119322 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 01:05:32.119333 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:05:32.119340 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:05:32.119348 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 01:05:32.119355 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 01:05:32.119362 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:05:32.119468 kernel: Using GB pages for direct mapping
Jan 14 01:05:32.119476 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:05:32.119487 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 01:05:32.119495 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119502 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119510 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119517 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 01:05:32.119524 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119532 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119541 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119553 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:05:32.119764 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 01:05:32.119784 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 01:05:32.119799 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 01:05:32.119812 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 01:05:32.119831 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 01:05:32.119844 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 01:05:32.119856 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 01:05:32.119869 kernel: No NUMA configuration found
Jan 14 01:05:32.119881 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 01:05:32.119893 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 01:05:32.119911 kernel: Zone ranges:
Jan 14 01:05:32.119924 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:05:32.119937 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 01:05:32.119949 kernel: Normal empty
Jan 14 01:05:32.119962 kernel: Device empty
Jan 14 01:05:32.119975 kernel: Movable zone start for each node
Jan 14 01:05:32.119988 kernel: Early memory node ranges
Jan 14 01:05:32.120002 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 01:05:32.120021 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 01:05:32.120035 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 01:05:32.120049 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:05:32.120062 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 01:05:32.120182 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 01:05:32.120196 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:05:32.120206 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:05:32.120225 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:05:32.120239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:05:32.120334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:05:32.120342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:05:32.120350 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:05:32.120358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:05:32.120463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:05:32.120477 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 01:05:32.120484 kernel: TSC deadline timer available
Jan 14 01:05:32.120493 kernel: CPU topo: Max. logical packages: 1
Jan 14 01:05:32.120501 kernel: CPU topo: Max. logical dies: 1
Jan 14 01:05:32.120508 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:05:32.120516 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:05:32.120523 kernel: CPU topo: Num. cores per package: 4
Jan 14 01:05:32.120534 kernel: CPU topo: Num. threads per package: 4
Jan 14 01:05:32.120541 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 01:05:32.120549 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:05:32.120557 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 01:05:32.120564 kernel: kvm-guest: setup PV sched yield
Jan 14 01:05:32.120729 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 01:05:32.120738 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:05:32.120746 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:05:32.120758 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 01:05:32.120766 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 01:05:32.120773 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 01:05:32.120781 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 01:05:32.120789 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:05:32.120797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:05:32.120805 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:05:32.120816 kernel: random: crng init done
Jan 14 01:05:32.120824 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 01:05:32.120832 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:05:32.120839 kernel: Fallback order for Node 0: 0
Jan 14 01:05:32.120847 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 01:05:32.120855 kernel: Policy zone: DMA32
Jan 14 01:05:32.120863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:05:32.120873 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 01:05:32.120880 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 01:05:32.120888 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:05:32.120896 kernel: Dynamic Preempt: voluntary
Jan 14 01:05:32.120904 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:05:32.120912 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:05:32.120920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 01:05:32.120931 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:05:32.121025 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:05:32.121038 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:05:32.121049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:05:32.121059 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 01:05:32.121069 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:05:32.121083 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:05:32.121101 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:05:32.121112 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 01:05:32.121123 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:05:32.121148 kernel: Console: colour VGA+ 80x25
Jan 14 01:05:32.121166 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:05:32.121175 kernel: ACPI: Core revision 20240827
Jan 14 01:05:32.121183 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 01:05:32.121191 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:05:32.121199 kernel: x2apic enabled
Jan 14 01:05:32.121206 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:05:32.121311 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 01:05:32.121327 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 01:05:32.121341 kernel: kvm-guest: setup PV IPIs
Jan 14 01:05:32.121350 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 01:05:32.121362 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:05:32.121464 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 01:05:32.121472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:05:32.121480 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 01:05:32.121489 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 01:05:32.121498 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:05:32.121505 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:05:32.121517 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:05:32.121525 kernel: Speculative Store Bypass: Vulnerable
Jan 14 01:05:32.121533 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 01:05:32.121542 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 01:05:32.121550 kernel: active return thunk: srso_alias_return_thunk
Jan 14 01:05:32.121558 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 01:05:32.121568 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 01:05:32.121736 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 01:05:32.121744 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:05:32.121752 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:05:32.121760 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:05:32.121768 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:05:32.121776 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 01:05:32.121788 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:05:32.121796 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:05:32.121804 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:05:32.121812 kernel: landlock: Up and running.
Jan 14 01:05:32.121820 kernel: SELinux: Initializing.
Jan 14 01:05:32.121828 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:05:32.121836 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:05:32.121947 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 01:05:32.121966 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 01:05:32.121976 kernel: signal: max sigframe size: 1776
Jan 14 01:05:32.121985 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:05:32.121993 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:05:32.122006 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 01:05:32.122021 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 01:05:32.122038 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:05:32.122049 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:05:32.122060 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 01:05:32.122072 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 01:05:32.122086 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 01:05:32.122099 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120524K reserved, 0K cma-reserved)
Jan 14 01:05:32.122110 kernel: devtmpfs: initialized
Jan 14 01:05:32.122121 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:05:32.122140 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:05:32.122153 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 01:05:32.122165 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:05:32.122173 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:05:32.122181 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:05:32.122189 kernel: audit: type=2000 audit(1768352691.096:1): state=initialized audit_enabled=0 res=1
Jan 14 01:05:32.122197 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:05:32.122208 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:05:32.122216 kernel: cpuidle: using governor menu
Jan 14 01:05:32.122224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:05:32.122232 kernel: dca service started, version 1.12.1
Jan 14 01:05:32.122240 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 01:05:32.122248 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 01:05:32.122256 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:05:32.122267 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:05:32.122275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:05:32.122283 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:05:32.122291 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:05:32.122299 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:05:32.122307 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:05:32.122315 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:05:32.122325 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:05:32.122333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:05:32.122341 kernel: ACPI: Interpreter enabled
Jan 14 01:05:32.122349 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 01:05:32.122362 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:05:32.122489 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:05:32.122501 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:05:32.122520 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:05:32.122531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:05:32.123306 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:05:32.124787 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 01:05:32.125082 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 01:05:32.125102 kernel: PCI host bridge to bus 0000:00
Jan 14 01:05:32.125974 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:05:32.126236 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:05:32.127058 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:05:32.128021 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 01:05:32.128311 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 01:05:32.128905 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 01:05:32.130185 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:05:32.130566 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:05:32.131012 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:05:32.131284 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 01:05:32.131834 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 01:05:32.132123 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 01:05:32.132539 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:05:32.133008 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 27343 usecs
Jan 14 01:05:32.133287 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 01:05:32.133793 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 01:05:32.134014 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 01:05:32.134277 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 01:05:32.134874 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:05:32.135120 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 01:05:32.135542 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 01:05:32.135993 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 01:05:32.136287 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:05:32.136831 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 01:05:32.137102 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 01:05:32.137478 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 01:05:32.137919 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 01:05:32.138199 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:05:32.138768 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:05:32.139031 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 29296 usecs
Jan 14 01:05:32.139321 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:05:32.139869 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 01:05:32.140133 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 01:05:32.140527 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:05:32.140962 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 01:05:32.140980 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:05:32.140993 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:05:32.141005 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:05:32.141017 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:05:32.141028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:05:32.141046 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:05:32.141058 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:05:32.141069 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:05:32.141082 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:05:32.141093 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:05:32.141105 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:05:32.141118 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:05:32.141137 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:05:32.141148 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:05:32.141160 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:05:32.141170 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:05:32.141181 kernel: iommu: Default domain type: Translated
Jan 14 01:05:32.141196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:05:32.141209 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:05:32.141225 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:05:32.141236 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 01:05:32.141248 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 01:05:32.141802 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:05:32.142060 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:05:32.142334 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:05:32.142352 kernel: vgaarb: loaded
Jan 14 01:05:32.142489 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 01:05:32.142502 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 01:05:32.142513 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:05:32.142525 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:05:32.142537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:05:32.142549 kernel: pnp: PnP ACPI init
Jan 14 01:05:32.142996 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 01:05:32.143024 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 01:05:32.143039 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:05:32.143051 kernel: NET: Registered PF_INET protocol family
Jan 14 01:05:32.143062 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 01:05:32.143073 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 01:05:32.143085 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:05:32.143098 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:05:32.143114 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 01:05:32.143127 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 01:05:32.143141 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:05:32.143153 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:05:32.143164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:05:32.143175 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:05:32.143537 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:05:32.143970 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:05:32.144222 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:05:32.144771 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 01:05:32.145007 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 01:05:32.145204 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 01:05:32.145214 kernel: PCI: CLS 0 bytes, default 64
Jan 14 01:05:32.145228 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:05:32.145237 kernel: Initialise system trusted keyrings
Jan 14 01:05:32.145245 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 01:05:32.145253 kernel: Key type asymmetric registered
Jan 14 01:05:32.145261 kernel: Asymmetric key parser 'x509' registered
Jan 14 01:05:32.145268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 01:05:32.145277 kernel: io scheduler mq-deadline registered
Jan 14 01:05:32.145287 kernel: io scheduler kyber registered
Jan 14 01:05:32.145295 kernel: io scheduler bfq registered
Jan 14 01:05:32.145306 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 01:05:32.145322 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 01:05:32.145336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 01:05:32.145348 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 01:05:32.145359 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 01:05:32.145477 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 01:05:32.145494 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 01:05:32.145508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 01:05:32.145522 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 01:05:32.145534 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 01:05:32.145991 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 01:05:32.146195 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 01:05:32.146517 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T01:05:08 UTC (1768352708)
Jan 14 01:05:32.146905 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 01:05:32.146919 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 01:05:32.146927 kernel: NET: Registered PF_INET6 protocol family
Jan 14 01:05:32.146936 kernel: Segment Routing with IPv6
Jan 14 01:05:32.146944 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 01:05:32.146952 kernel: NET: Registered PF_PACKET protocol family
Jan 14 01:05:32.146965 kernel: Key type dns_resolver registered
Jan 14 01:05:32.146974 kernel: IPI shorthand broadcast: enabled
Jan 14 01:05:32.146982 kernel: sched_clock: Marking stable (15034180200, 1343115585)->(18467455285, -2090159500)
Jan 14 01:05:32.146990 kernel: registered taskstats version 1
Jan 14 01:05:32.146998 kernel: Loading compiled-in X.509 certificates
Jan 14 01:05:32.147006 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb'
Jan 14 01:05:32.147015 kernel: Demotion targets for Node 0: null
Jan 14 01:05:32.147025 kernel: Key type .fscrypt registered
Jan 14 01:05:32.147033 kernel: Key type fscrypt-provisioning registered
Jan 14 01:05:32.147041 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 01:05:32.147050 kernel: ima: Allocated hash algorithm: sha1
Jan 14 01:05:32.147058 kernel: ima: No architecture policies found
Jan 14 01:05:32.147065 kernel: clk: Disabling unused clocks
Jan 14 01:05:32.147073 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 01:05:32.147084 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 01:05:32.147092 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 14 01:05:32.147100 kernel: Run /init as init process
Jan 14 01:05:32.147108 kernel: with arguments:
Jan 14 01:05:32.147117 kernel: /init
Jan 14 01:05:32.147125 kernel: with environment:
Jan 14 01:05:32.147133 kernel: HOME=/
Jan 14 01:05:32.147143 kernel: TERM=linux
Jan 14 01:05:32.147151 kernel: SCSI subsystem initialized
Jan 14 01:05:32.147159 kernel: libata version 3.00 loaded.
Jan 14 01:05:32.147475 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 01:05:32.147491 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 01:05:32.148288 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 01:05:32.149173 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 01:05:32.150257 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 01:05:32.150873 kernel: scsi host0: ahci
Jan 14 01:05:32.151104 kernel: scsi host1: ahci
Jan 14 01:05:32.151870 kernel: scsi host2: ahci
Jan 14 01:05:32.152106 kernel: scsi host3: ahci
Jan 14 01:05:32.152336 kernel: scsi host4: ahci
Jan 14 01:05:32.152850 kernel: scsi host5: ahci
Jan 14 01:05:32.152866 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 01:05:32.152875 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 01:05:32.152885 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 01:05:32.152893 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 01:05:32.152907 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 01:05:32.152915 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 01:05:32.152924 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 01:05:32.152932 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 01:05:32.152940 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 01:05:32.152948 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 01:05:32.152957 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 01:05:32.152968 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 01:05:32.152976 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 01:05:32.152984 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 01:05:32.152993 kernel: ata3.00: applying bridge limits
Jan 14 01:05:32.153001 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 01:05:32.153009 kernel: ata3.00: configured for UDMA/100
Jan 14 01:05:32.153493 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 01:05:32.153933 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 01:05:32.154171 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 01:05:32.154182 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 01:05:32.154515 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 01:05:32.155026 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 01:05:32.155048 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 01:05:32.155072 kernel: GPT:16515071 != 27000831
Jan 14 01:05:32.155086 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 01:05:32.155100 kernel: GPT:16515071 != 27000831
Jan 14 01:05:32.155115 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 01:05:32.155130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 01:05:32.155144 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 01:05:32.155159 kernel: device-mapper: uevent: version 1.0.3
Jan 14 01:05:32.155180 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 01:05:32.155196 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 01:05:32.155210 kernel: raid6: avx2x4 gen() 5465 MB/s
Jan 14 01:05:32.155221 kernel: raid6: avx2x2 gen() 5771 MB/s
Jan 14 01:05:32.155232 kernel: raid6: avx2x1 gen() 6049 MB/s
Jan 14 01:05:32.155247 kernel: raid6: using algorithm avx2x1 gen() 6049 MB/s
Jan 14 01:05:32.155265 kernel: raid6: .... xor() 9643 MB/s, rmw enabled
Jan 14 01:05:32.155280 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 01:05:32.155293 kernel: xor: automatically using best checksumming function avx
Jan 14 01:05:32.155312 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 01:05:32.155326 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (180)
Jan 14 01:05:32.155342 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac
Jan 14 01:05:32.155361 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:05:32.155485 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 01:05:32.155501 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 01:05:32.155516 kernel: loop: module loaded
Jan 14 01:05:32.155530 kernel: loop0: detected capacity change from 0 to 100552
Jan 14 01:05:32.155546 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 01:05:32.155562 systemd[1]: Successfully made /usr/ read-only.
Jan 14 01:05:32.155775 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 01:05:32.155791 systemd[1]: Detected virtualization kvm.
Jan 14 01:05:32.155807 systemd[1]: Detected architecture x86-64.
Jan 14 01:05:32.155821 systemd[1]: Running in initrd.
Jan 14 01:05:32.155835 systemd[1]: No hostname configured, using default hostname.
Jan 14 01:05:32.155858 systemd[1]: Hostname set to .
Jan 14 01:05:32.155873 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 01:05:32.155888 systemd[1]: Queued start job for default target initrd.target.
Jan 14 01:05:32.155901 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:05:32.155915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:05:32.155928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:05:32.155942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 01:05:32.155962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:05:32.155978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 01:05:32.155992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 01:05:32.156005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:05:32.156018 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:05:32.156036 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:05:32.156052 systemd[1]: Reached target paths.target - Path Units.
Jan 14 01:05:32.156068 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:05:32.156082 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:05:32.156095 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 01:05:32.156109 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:05:32.156125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:05:32.156147 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:05:32.156162 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 01:05:32.156177 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 01:05:32.156193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:05:32.156208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:05:32.156224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:05:32.156236 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:05:32.156253 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:05:32.156269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:05:32.156283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:05:32.156295 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:05:32.156308 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:05:32.156323 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:05:32.156337 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:05:32.156359 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:05:32.156491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:05:32.156507 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:05:32.156531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:05:32.156791 systemd-journald[318]: Collecting audit messages is enabled. Jan 14 01:05:32.156827 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:05:32.156846 kernel: audit: type=1130 audit(1768352732.110:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:05:32.156860 systemd-journald[318]: Journal started Jan 14 01:05:32.156886 systemd-journald[318]: Runtime Journal (/run/log/journal/27a36cd916964c67a6c52129b300a89d) is 6M, max 48.2M, 42.1M free. Jan 14 01:05:32.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.163805 kernel: audit: type=1130 audit(1768352732.162:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.164949 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:05:32.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.225895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:05:32.265993 kernel: audit: type=1130 audit(1768352732.218:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:32.309136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:05:32.381954 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:05:32.406121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 14 01:05:33.367131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:05:33.367175 kernel: Bridge firewalling registered Jan 14 01:05:32.437908 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 14 01:05:33.385356 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:05:33.479029 kernel: audit: type=1130 audit(1768352733.382:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.479067 kernel: audit: type=1130 audit(1768352733.408:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.439316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:05:33.529859 kernel: audit: type=1130 audit(1768352733.492:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:33.530820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:05:33.591813 kernel: audit: type=1130 audit(1768352733.530:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.570189 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:05:33.633261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:05:33.646973 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:05:33.692511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:05:33.748292 kernel: audit: type=1130 audit(1768352733.705:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.747270 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:05:33.806249 kernel: audit: type=1130 audit(1768352733.747:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:33.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.758791 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:05:33.870522 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:05:33.886331 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf Jan 14 01:05:34.096171 kernel: audit: type=1130 audit(1768352733.900:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:33.902000 audit: BPF prog-id=6 op=LOAD Jan 14 01:05:34.046003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:05:34.234296 systemd-resolved[371]: Positive Trust Anchors: Jan 14 01:05:34.234521 systemd-resolved[371]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:05:34.234529 systemd-resolved[371]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:05:34.234568 systemd-resolved[371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:05:34.394171 systemd-resolved[371]: Defaulting to hostname 'linux'. Jan 14 01:05:34.401025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:05:34.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:34.421192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:05:34.800287 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:05:34.875301 kernel: iscsi: registered transport (tcp) Jan 14 01:05:34.941791 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:05:34.941878 kernel: QLogic iSCSI HBA Driver Jan 14 01:05:35.074569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:05:35.138940 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:05:35.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:35.181996 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 14 01:05:35.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:35.402254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:05:35.421249 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:05:35.475219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:05:35.597117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:05:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:35.629000 audit: BPF prog-id=7 op=LOAD Jan 14 01:05:35.629000 audit: BPF prog-id=8 op=LOAD Jan 14 01:05:35.634934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:05:35.733044 systemd-udevd[589]: Using default interface naming scheme 'v257'. Jan 14 01:05:35.768955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:05:35.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:35.795521 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:05:35.927231 dracut-pre-trigger[622]: rd.md=0: removing MD RAID activation Jan 14 01:05:36.081215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:05:36.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:36.100179 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:05:36.318128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:05:36.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:36.325000 audit: BPF prog-id=9 op=LOAD Jan 14 01:05:36.327289 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:05:36.434921 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:05:36.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:36.479002 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:05:36.612936 systemd-networkd[733]: lo: Link UP Jan 14 01:05:36.612948 systemd-networkd[733]: lo: Gained carrier Jan 14 01:05:36.618071 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 01:05:36.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:36.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:36.651062 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:05:36.682887 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 14 01:05:36.798165 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:05:36.748344 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 01:05:36.858357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:05:36.916120 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 01:05:36.940788 systemd[1]: Reached target network.target - Network. Jan 14 01:05:36.984960 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:05:37.026321 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:05:37.061016 kernel: AES CTR mode by8 optimization enabled Jan 14 01:05:37.085746 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:05:37.098820 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:05:37.157302 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:05:37.182170 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:05:37.228953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:05:37.229497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:05:37.274080 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:05:37.326934 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 14 01:05:37.326978 kernel: audit: type=1131 audit(1768352737.273:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:37.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:37.361139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:05:37.419204 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:05:37.463331 disk-uuid[849]: Primary Header is updated. Jan 14 01:05:37.463331 disk-uuid[849]: Secondary Entries is updated. Jan 14 01:05:37.463331 disk-uuid[849]: Secondary Header is updated. Jan 14 01:05:38.642903 kernel: audit: type=1130 audit(1768352737.462:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:38.642947 kernel: audit: type=1130 audit(1768352738.582:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:37.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:38.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:37.419219 systemd-networkd[733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:05:38.661178 disk-uuid[852]: Warning: The kernel is still using the old partition table. Jan 14 01:05:38.661178 disk-uuid[852]: The new table will be used at the next reboot or after you Jan 14 01:05:38.661178 disk-uuid[852]: run partprobe(8) or kpartx(8) Jan 14 01:05:38.661178 disk-uuid[852]: The operation has completed successfully. 
Jan 14 01:05:38.816129 kernel: audit: type=1130 audit(1768352738.682:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:38.816170 kernel: audit: type=1131 audit(1768352738.682:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:38.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:38.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:37.433560 systemd-networkd[733]: eth0: Link UP Jan 14 01:05:37.435309 systemd-networkd[733]: eth0: Gained carrier Jan 14 01:05:37.435329 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:05:37.457161 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:05:37.533068 systemd-networkd[733]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:05:38.566819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:05:38.643237 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:05:38.643554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 01:05:38.687333 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 14 01:05:39.100911 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Jan 14 01:05:39.133209 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 01:05:39.133276 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:05:39.184991 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:05:39.185098 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:05:39.235851 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 01:05:39.253008 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:05:39.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:39.291096 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 01:05:39.361008 kernel: audit: type=1130 audit(1768352739.285:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:39.362034 systemd-networkd[733]: eth0: Gained IPv6LL Jan 14 01:05:39.706899 ignition[885]: Ignition 2.24.0 Jan 14 01:05:39.707011 ignition[885]: Stage: fetch-offline Jan 14 01:05:39.707085 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:39.707106 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:39.707548 ignition[885]: parsed url from cmdline: "" Jan 14 01:05:39.707554 ignition[885]: no config URL provided Jan 14 01:05:39.707563 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:05:39.707802 ignition[885]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:05:39.707868 ignition[885]: op(1): [started] loading QEMU firmware config module Jan 14 01:05:39.707879 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 14 01:05:39.804784 ignition[885]: op(1): [finished] loading QEMU firmware config module Jan 14 01:05:41.620295 ignition[885]: parsing config with SHA512: 73bf8da899d6921910c48f08443606ef357cc2ee1c3ec9693cf858ecadc24670d7a2d9dc2743eda2f20d10dcf999f8b66ca0f4b297c92f1f5b9c3d1b6709989c Jan 14 01:05:41.653318 unknown[885]: fetched base config from "system" Jan 14 01:05:41.654939 ignition[885]: fetch-offline: fetch-offline passed Jan 14 01:05:41.653334 unknown[885]: fetched user config from "qemu" Jan 14 01:05:41.655025 ignition[885]: Ignition finished successfully Jan 14 01:05:41.703191 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:05:41.810227 kernel: audit: type=1130 audit(1768352741.740:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:41.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:41.745085 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 14 01:05:41.750980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 01:05:42.006936 ignition[894]: Ignition 2.24.0 Jan 14 01:05:42.007058 ignition[894]: Stage: kargs Jan 14 01:05:42.007279 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:42.007293 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:42.030245 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:05:42.112871 kernel: audit: type=1130 audit(1768352742.065:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:42.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:42.010539 ignition[894]: kargs: kargs passed Jan 14 01:05:42.068984 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:05:42.010844 ignition[894]: Ignition finished successfully Jan 14 01:05:42.278357 ignition[903]: Ignition 2.24.0 Jan 14 01:05:42.278861 ignition[903]: Stage: disks Jan 14 01:05:42.279088 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:42.279108 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:42.328813 ignition[903]: disks: disks passed Jan 14 01:05:42.329005 ignition[903]: Ignition finished successfully Jan 14 01:05:42.345402 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:05:42.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:05:42.394111 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:05:42.465175 kernel: audit: type=1130 audit(1768352742.392:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:42.431291 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:05:42.512013 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:05:42.530233 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:05:42.580879 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:05:42.584227 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 01:05:42.845320 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 14 01:05:42.875879 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 01:05:42.956195 kernel: audit: type=1130 audit(1768352742.900:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:42.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:42.953056 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 01:05:44.267910 kernel: EXT4-fs (vda9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none. Jan 14 01:05:44.273334 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 01:05:44.289249 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 01:05:44.317552 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 14 01:05:44.346279 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 01:05:44.353077 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 01:05:44.353138 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 01:05:44.353175 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:05:44.538027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Jan 14 01:05:44.412060 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 01:05:44.544240 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 01:05:44.598065 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 01:05:44.598133 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:05:44.696919 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:05:44.696989 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:05:44.705396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:05:46.043235 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:05:46.126344 kernel: audit: type=1130 audit(1768352746.067:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.073132 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 14 01:05:46.130800 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:05:46.257768 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 01:05:46.300962 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 01:05:46.417317 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:05:46.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.484096 kernel: audit: type=1130 audit(1768352746.450:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.500317 ignition[1018]: INFO : Ignition 2.24.0 Jan 14 01:05:46.500317 ignition[1018]: INFO : Stage: mount Jan 14 01:05:46.547354 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:46.547354 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:46.616425 ignition[1018]: INFO : mount: mount passed Jan 14 01:05:46.616425 ignition[1018]: INFO : Ignition finished successfully Jan 14 01:05:46.658948 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:05:46.738076 kernel: audit: type=1130 audit(1768352746.659:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:46.668224 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:05:46.836382 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 14 01:05:46.984022 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031) Jan 14 01:05:47.007198 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 01:05:47.007256 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:05:47.104807 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:05:47.104913 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:05:47.125266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:05:47.338045 ignition[1047]: INFO : Ignition 2.24.0 Jan 14 01:05:47.359259 ignition[1047]: INFO : Stage: files Jan 14 01:05:47.387007 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:47.387007 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:47.480325 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:05:47.498237 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:05:47.498237 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:05:47.537252 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:05:47.537252 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:05:47.537252 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:05:47.533342 unknown[1047]: wrote ssh authorized keys file for user: core Jan 14 01:05:47.632007 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:05:47.632007 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 
14 01:05:47.944158 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:05:48.292960 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:05:48.319999 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:05:48.550395 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 01:05:49.000004 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:05:50.149881 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 14 01:05:50.193073 
ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 14 01:05:50.193073 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 14 01:05:50.587831 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:05:50.619171 ignition[1047]: INFO : files: files passed Jan 14 01:05:50.619171 ignition[1047]: INFO : Ignition finished successfully Jan 14 01:05:50.864330 kernel: audit: type=1130 audit(1768352750.686:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:50.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:50.625865 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:05:50.695213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 14 01:05:50.833998 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:05:50.917266 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Jan 14 01:05:50.944956 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:05:50.944956 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:05:51.107378 kernel: audit: type=1130 audit(1768352750.983:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.107427 kernel: audit: type=1131 audit(1768352750.984:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:50.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:50.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.107939 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:05:50.962177 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:05:51.203966 kernel: audit: type=1130 audit(1768352751.132:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:51.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:50.962827 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 01:05:50.989037 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:05:51.209144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:05:51.275904 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:05:51.559202 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:05:51.679227 kernel: audit: type=1130 audit(1768352751.585:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.679296 kernel: audit: type=1131 audit(1768352751.586:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.560067 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:05:51.587346 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:05:51.698969 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 14 01:05:51.725158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:05:51.727225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:05:51.938424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:05:52.042206 kernel: audit: type=1130 audit(1768352751.958:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:51.965259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:05:52.108173 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:05:52.108882 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:05:52.130252 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:05:52.214023 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:05:52.233932 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:05:52.234102 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:05:52.250015 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:05:52.265183 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:05:52.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:52.372769 kernel: audit: type=1131 audit(1768352752.237:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.374913 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:05:52.414150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:05:52.423160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:05:52.451829 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:05:52.489054 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:05:52.521346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:05:52.559313 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:05:52.598389 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:05:52.628411 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:05:52.664833 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:05:52.666391 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:05:52.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.728846 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:05:52.821353 kernel: audit: type=1131 audit(1768352752.727:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.782358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 01:05:52.802247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:05:52.936085 kernel: audit: type=1131 audit(1768352752.877:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.805356 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:05:52.832299 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:05:52.833042 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:05:52.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:52.963317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:05:52.964088 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:05:52.980237 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:05:52.990234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:05:52.995194 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:05:53.094402 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:05:53.146082 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:05:53.200319 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:05:53.200998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:05:53.250383 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 14 01:05:53.250904 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:05:53.298320 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:05:53.299002 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:05:53.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.316412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 01:05:53.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.317248 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:05:53.352424 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:05:53.355059 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:05:53.416415 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:05:53.506451 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:05:53.523950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:05:53.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.524148 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:05:53.563394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 14 01:05:53.619235 ignition[1105]: INFO : Ignition 2.24.0 Jan 14 01:05:53.619235 ignition[1105]: INFO : Stage: umount Jan 14 01:05:53.619235 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:05:53.619235 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:05:53.619235 ignition[1105]: INFO : umount: umount passed Jan 14 01:05:53.619235 ignition[1105]: INFO : Ignition finished successfully Jan 14 01:05:53.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.663840 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:05:53.705062 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:05:53.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:53.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.705257 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:05:53.779060 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:05:54.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.782115 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:05:53.782390 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:05:53.804230 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:05:53.804982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:05:53.813878 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 14 01:05:54.119000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:05:53.814170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:05:54.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.824135 systemd[1]: Stopped target network.target - Network. Jan 14 01:05:53.824382 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:05:54.199000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:05:53.824828 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:05:53.842840 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:05:53.842939 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:05:53.864072 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:05:53.864181 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:05:53.888081 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:05:53.888211 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:05:53.906182 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:05:53.906309 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:05:53.929149 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:05:53.942256 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:05:54.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.993356 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 14 01:05:54.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:53.993990 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:05:54.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.120921 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:05:54.121245 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:05:54.225443 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:05:54.236953 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:05:54.237045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:05:54.394136 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:05:54.408370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:05:54.408837 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:05:54.461165 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:05:54.461283 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:05:54.478819 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:05:54.478910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:05:54.512152 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:05:54.791288 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:05:54.792319 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 14 01:05:54.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.814877 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:05:54.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.815246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:05:54.840958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:05:54.841041 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:05:54.887035 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:05:54.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.887120 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:05:55.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:54.909900 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:05:54.909994 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:05:54.972318 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:05:54.972412 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 14 01:05:55.005174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:05:55.005308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:05:55.025973 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:05:55.036405 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:05:55.036847 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:05:55.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.221419 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:05:55.221862 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:05:55.222231 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 01:05:55.222299 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:05:55.295120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:05:55.295237 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:05:55.316390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:05:55.316821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:05:55.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:05:55.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.480898 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:05:55.482095 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:05:55.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:05:55.540291 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:05:55.585376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:05:55.675980 systemd[1]: Switching root. Jan 14 01:05:55.798385 systemd-journald[318]: Journal stopped Jan 14 01:06:10.740173 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). 
Jan 14 01:06:10.741076 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:06:10.741114 kernel: audit: type=1335 audit(1768352755.813:83): pid=318 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 14 01:06:10.741143 kernel: hrtimer: interrupt took 4071221 ns Jan 14 01:06:10.741160 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:06:10.741181 kernel: SELinux: policy capability open_perms=1 Jan 14 01:06:10.741199 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:06:10.741220 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:06:10.741236 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:06:10.741252 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:06:10.741268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:06:10.741283 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:06:10.741303 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:06:10.741320 kernel: audit: type=1403 audit(1768352756.730:84): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 01:06:10.741341 systemd[1]: Successfully loaded SELinux policy in 517.152ms. Jan 14 01:06:10.741360 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 86.398ms. Jan 14 01:06:10.741378 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:06:10.741399 systemd[1]: Detected virtualization kvm. Jan 14 01:06:10.741416 systemd[1]: Detected architecture x86-64. Jan 14 01:06:10.741440 systemd[1]: Detected first boot. 
Jan 14 01:06:10.741457 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:06:10.741473 kernel: audit: type=1334 audit(1768352757.818:85): prog-id=10 op=LOAD Jan 14 01:06:10.742209 kernel: audit: type=1334 audit(1768352757.818:86): prog-id=10 op=UNLOAD Jan 14 01:06:10.742234 kernel: audit: type=1334 audit(1768352757.818:87): prog-id=11 op=LOAD Jan 14 01:06:10.742253 kernel: audit: type=1334 audit(1768352757.818:88): prog-id=11 op=UNLOAD Jan 14 01:06:10.742271 zram_generator::config[1149]: No configuration found. Jan 14 01:06:10.742299 kernel: Guest personality initialized and is inactive Jan 14 01:06:10.742316 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:06:10.742336 kernel: Initialized host personality Jan 14 01:06:10.742358 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:06:10.742375 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:06:10.742392 kernel: audit: type=1334 audit(1768352764.187:89): prog-id=12 op=LOAD Jan 14 01:06:10.742407 kernel: audit: type=1334 audit(1768352764.187:90): prog-id=3 op=UNLOAD Jan 14 01:06:10.742427 kernel: audit: type=1334 audit(1768352764.187:91): prog-id=13 op=LOAD Jan 14 01:06:10.742446 kernel: audit: type=1334 audit(1768352764.187:92): prog-id=14 op=LOAD Jan 14 01:06:10.742463 kernel: audit: type=1334 audit(1768352764.190:93): prog-id=4 op=UNLOAD Jan 14 01:06:10.742478 kernel: audit: type=1334 audit(1768352764.190:94): prog-id=5 op=UNLOAD Jan 14 01:06:10.742494 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:06:10.742514 kernel: audit: type=1131 audit(1768352764.196:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:10.743222 kernel: audit: type=1334 audit(1768352764.373:96): prog-id=12 op=UNLOAD Jan 14 01:06:10.743255 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:06:10.743273 kernel: audit: type=1130 audit(1768352764.428:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:10.743290 kernel: audit: type=1131 audit(1768352764.430:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:10.743307 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:06:10.743332 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:06:10.743360 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:06:10.743379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:06:10.743396 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:06:10.743412 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:06:10.743429 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:06:10.743445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:06:10.743462 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:06:10.743487 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:06:10.743505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 14 01:06:10.743522 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:06:10.743911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:06:10.743938 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:06:10.743961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:06:10.743979 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:06:10.744126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:06:10.744152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:06:10.744169 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:06:10.744186 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:06:10.744209 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:06:10.744226 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:06:10.744245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:06:10.744271 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:06:10.744288 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:06:10.744306 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:06:10.744322 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:06:10.744339 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:06:10.744355 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:06:10.744375 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jan 14 01:06:10.744396 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:06:10.744413 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:06:10.744430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:06:10.744446 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:06:10.744463 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:06:10.744481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:06:10.744500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:06:10.744525 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:06:10.746964 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:06:10.746987 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:06:10.747005 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:06:10.747023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:10.747040 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:06:10.747057 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:06:10.747086 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:06:10.747104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:06:10.747121 systemd[1]: Reached target machines.target - Containers. Jan 14 01:06:10.747138 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 14 01:06:10.747154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:06:10.747172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:06:10.747191 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:06:10.747214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:06:10.747231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:06:10.747249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:06:10.747265 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:06:10.747854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:06:10.747880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:06:10.747899 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:06:10.747929 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:06:10.747949 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:06:10.747968 kernel: audit: type=1131 audit(1768352770.093:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:10.747987 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:06:10.748005 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:06:10.748024 kernel: audit: type=1131 audit(1768352770.206:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:10.748047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:06:10.748066 kernel: audit: type=1334 audit(1768352770.315:102): prog-id=14 op=UNLOAD Jan 14 01:06:10.748088 kernel: audit: type=1334 audit(1768352770.315:103): prog-id=13 op=UNLOAD Jan 14 01:06:10.748105 kernel: audit: type=1334 audit(1768352770.329:104): prog-id=15 op=LOAD Jan 14 01:06:10.748121 kernel: ACPI: bus type drm_connector registered Jan 14 01:06:10.748137 kernel: audit: type=1334 audit(1768352770.359:105): prog-id=16 op=LOAD Jan 14 01:06:10.748152 kernel: audit: type=1334 audit(1768352770.384:106): prog-id=17 op=LOAD Jan 14 01:06:10.748175 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:06:10.748195 kernel: fuse: init (API version 7.41) Jan 14 01:06:10.748215 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:06:10.748233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:06:10.748258 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:06:10.748404 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:06:10.748458 systemd-journald[1235]: Collecting audit messages is enabled. 
Jan 14 01:06:10.748483 kernel: audit: type=1305 audit(1768352770.692:107): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:06:10.748499 kernel: audit: type=1300 audit(1768352770.692:107): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc55210320 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:10.748516 systemd-journald[1235]: Journal started Jan 14 01:06:10.748855 systemd-journald[1235]: Runtime Journal (/run/log/journal/27a36cd916964c67a6c52129b300a89d) is 6M, max 48.2M, 42.1M free. Jan 14 01:06:10.825981 kernel: audit: type=1327 audit(1768352770.692:107): proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:06:07.107000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:06:10.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:10.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:10.315000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:06:10.315000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:06:10.329000 audit: BPF prog-id=15 op=LOAD Jan 14 01:06:10.359000 audit: BPF prog-id=16 op=LOAD Jan 14 01:06:10.384000 audit: BPF prog-id=17 op=LOAD Jan 14 01:06:10.692000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:06:10.692000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc55210320 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:10.692000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:06:04.131305 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:06:04.191866 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 01:06:04.195164 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:06:04.197222 systemd[1]: systemd-journald.service: Consumed 4.069s CPU time. Jan 14 01:06:10.847963 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:06:10.903942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:10.927138 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:06:10.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:10.950498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 14 01:06:10.968289 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:06:10.987153 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:06:11.006959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:06:11.027505 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:06:11.050523 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:06:11.070485 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:06:11.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.096073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:06:11.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.120981 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:06:11.121423 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:06:11.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.145353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:06:11.147256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 14 01:06:11.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.173060 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:06:11.174028 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:06:11.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.194270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:06:11.194521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:06:11.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.219249 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:06:11.221335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 14 01:06:11.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.242307 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:06:11.242996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:06:11.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.266304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:06:11.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.294274 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:06:11.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.324124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 14 01:06:11.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.351436 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:06:11.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.388308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:06:11.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.472231 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:06:11.499355 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:06:11.537000 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:06:11.566260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:06:11.589180 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:06:11.590054 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:06:11.612294 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:06:11.641217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 14 01:06:11.642435 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:06:11.655025 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:06:11.681525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:06:11.707207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:06:11.726284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:06:11.752311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:06:11.758512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:06:11.788211 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:06:11.814262 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:06:11.844166 systemd-journald[1235]: Time spent on flushing to /var/log/journal/27a36cd916964c67a6c52129b300a89d is 101.016ms for 1141 entries. Jan 14 01:06:11.844166 systemd-journald[1235]: System Journal (/var/log/journal/27a36cd916964c67a6c52129b300a89d) is 8M, max 163.5M, 155.5M free. Jan 14 01:06:12.078471 systemd-journald[1235]: Received client request to flush runtime journal. Jan 14 01:06:12.078533 kernel: loop1: detected capacity change from 0 to 229808 Jan 14 01:06:11.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:12.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:11.854299 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:06:11.888439 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:06:11.923341 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:06:11.949450 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:06:11.977056 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:06:12.015471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:06:12.021210 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jan 14 01:06:12.021228 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jan 14 01:06:12.046138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:06:12.073418 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:06:12.103118 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:06:12.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.149218 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 14 01:06:12.153369 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:06:12.163948 kernel: loop2: detected capacity change from 0 to 50784 Jan 14 01:06:12.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.291530 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:06:12.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.318000 audit: BPF prog-id=18 op=LOAD Jan 14 01:06:12.318000 audit: BPF prog-id=19 op=LOAD Jan 14 01:06:12.319000 audit: BPF prog-id=20 op=LOAD Jan 14 01:06:12.322299 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:06:12.342469 kernel: loop3: detected capacity change from 0 to 111560 Jan 14 01:06:12.348000 audit: BPF prog-id=21 op=LOAD Jan 14 01:06:12.352939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:06:12.378963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:06:12.409000 audit: BPF prog-id=22 op=LOAD Jan 14 01:06:12.409000 audit: BPF prog-id=23 op=LOAD Jan 14 01:06:12.410000 audit: BPF prog-id=24 op=LOAD Jan 14 01:06:12.416128 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:06:12.435000 audit: BPF prog-id=25 op=LOAD Jan 14 01:06:12.436000 audit: BPF prog-id=26 op=LOAD Jan 14 01:06:12.436000 audit: BPF prog-id=27 op=LOAD Jan 14 01:06:12.440140 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:06:12.512381 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. 
Jan 14 01:06:12.512531 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Jan 14 01:06:12.549842 kernel: loop4: detected capacity change from 0 to 229808 Jan 14 01:06:12.532455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:06:12.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.625991 systemd-nsresourced[1294]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:06:12.629519 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:06:12.648251 kernel: loop5: detected capacity change from 0 to 50784 Jan 14 01:06:12.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.670156 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:06:12.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:12.727306 kernel: loop6: detected capacity change from 0 to 111560 Jan 14 01:06:12.835432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:06:12.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:12.856000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:06:12.856000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:06:12.857515 (sd-merge)[1299]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 14 01:06:12.862000 audit: BPF prog-id=28 op=LOAD Jan 14 01:06:12.863000 audit: BPF prog-id=29 op=LOAD Jan 14 01:06:12.866297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:06:12.870042 (sd-merge)[1299]: Merged extensions into '/usr'. Jan 14 01:06:12.894450 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:06:12.895000 systemd[1]: Reloading... Jan 14 01:06:12.896056 systemd-oomd[1291]: No swap; memory pressure usage will be degraded Jan 14 01:06:12.942463 systemd-resolved[1292]: Positive Trust Anchors: Jan 14 01:06:12.944079 systemd-resolved[1292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:06:12.944211 systemd-resolved[1292]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:06:12.944240 systemd-resolved[1292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:06:12.955514 systemd-resolved[1292]: Defaulting to hostname 'linux'. Jan 14 01:06:12.983460 systemd-udevd[1315]: Using default interface naming scheme 'v257'. Jan 14 01:06:13.064056 zram_generator::config[1341]: No configuration found. 
Jan 14 01:06:13.452930 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:06:13.491091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:06:13.506183 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:06:13.567050 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 01:06:13.587051 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:06:13.762167 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:06:13.763123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:06:13.788143 systemd[1]: Reloading finished in 892 ms. Jan 14 01:06:13.855481 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:06:13.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:13.895278 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:06:13.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:13.914046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:06:13.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:13.937300 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 14 01:06:13.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:14.056335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:06:14.133515 systemd[1]: Starting ensure-sysext.service... Jan 14 01:06:14.156439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:06:14.177000 audit: BPF prog-id=30 op=LOAD Jan 14 01:06:14.185241 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:06:14.215097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:06:14.265276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:06:14.293000 audit: BPF prog-id=31 op=LOAD Jan 14 01:06:14.293000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:06:14.293000 audit: BPF prog-id=32 op=LOAD Jan 14 01:06:14.294000 audit: BPF prog-id=33 op=LOAD Jan 14 01:06:14.294000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:06:14.294000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:06:14.304000 audit: BPF prog-id=34 op=LOAD Jan 14 01:06:14.305000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:06:14.310000 audit: BPF prog-id=35 op=LOAD Jan 14 01:06:14.370000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:06:14.376000 audit: BPF prog-id=36 op=LOAD Jan 14 01:06:14.383000 audit: BPF prog-id=37 op=LOAD Jan 14 01:06:14.383000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:06:14.383000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:06:14.399523 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:06:14.400342 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 14 01:06:14.408413 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:06:14.413000 audit: BPF prog-id=38 op=LOAD Jan 14 01:06:14.413000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:06:14.415000 audit: BPF prog-id=39 op=LOAD Jan 14 01:06:14.418000 audit: BPF prog-id=40 op=LOAD Jan 14 01:06:14.420000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:06:14.420000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:06:14.420193 systemd-tmpfiles[1425]: ACLs are not supported, ignoring. Jan 14 01:06:14.420345 systemd-tmpfiles[1425]: ACLs are not supported, ignoring. Jan 14 01:06:14.430000 audit: BPF prog-id=41 op=LOAD Jan 14 01:06:14.430000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:06:14.433000 audit: BPF prog-id=42 op=LOAD Jan 14 01:06:14.439000 audit: BPF prog-id=43 op=LOAD Jan 14 01:06:14.439000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:06:14.439000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:06:14.450000 audit: BPF prog-id=44 op=LOAD Jan 14 01:06:14.456000 audit: BPF prog-id=45 op=LOAD Jan 14 01:06:14.456000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:06:14.456000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:06:14.473275 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:06:14.473300 systemd-tmpfiles[1425]: Skipping /boot Jan 14 01:06:14.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:14.489075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:06:14.498367 systemd[1]: Reload requested from client PID 1422 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:06:14.498388 systemd[1]: Reloading... 
Jan 14 01:06:14.573415 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:06:14.573545 systemd-tmpfiles[1425]: Skipping /boot Jan 14 01:06:14.838909 zram_generator::config[1470]: No configuration found. Jan 14 01:06:14.943116 systemd-networkd[1424]: lo: Link UP Jan 14 01:06:14.943138 systemd-networkd[1424]: lo: Gained carrier Jan 14 01:06:14.972175 systemd-networkd[1424]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:06:14.972188 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:06:15.028406 systemd-networkd[1424]: eth0: Link UP Jan 14 01:06:15.035821 systemd-networkd[1424]: eth0: Gained carrier Jan 14 01:06:15.036207 systemd-networkd[1424]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:06:15.311209 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:06:15.619186 kernel: kvm_amd: TSC scaling supported Jan 14 01:06:15.619311 kernel: kvm_amd: Nested Virtualization enabled Jan 14 01:06:15.619340 kernel: kvm_amd: Nested Paging enabled Jan 14 01:06:15.619364 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 01:06:15.619395 kernel: kvm_amd: PMU virtualization is disabled Jan 14 01:06:16.266040 systemd[1]: Reloading finished in 1766 ms. Jan 14 01:06:16.337194 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:06:16.373977 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:06:16.416333 systemd-networkd[1424]: eth0: Gained IPv6LL Jan 14 01:06:16.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:16.871179 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 14 01:06:16.871247 kernel: audit: type=1130 audit(1768352776.840:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:16.849000 audit: BPF prog-id=46 op=LOAD Jan 14 01:06:16.849000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:06:16.849000 audit: BPF prog-id=47 op=LOAD Jan 14 01:06:16.849000 audit: BPF prog-id=48 op=LOAD Jan 14 01:06:16.849000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:06:16.849000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:06:16.850000 audit: BPF prog-id=49 op=LOAD Jan 14 01:06:16.850000 audit: BPF prog-id=50 op=LOAD Jan 14 01:06:16.850000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:06:16.895854 kernel: audit: type=1334 audit(1768352776.849:189): prog-id=46 op=LOAD Jan 14 01:06:16.895901 kernel: audit: type=1334 audit(1768352776.849:190): prog-id=31 op=UNLOAD Jan 14 01:06:16.895928 kernel: audit: type=1334 audit(1768352776.849:191): prog-id=47 op=LOAD Jan 14 01:06:16.895960 kernel: audit: type=1334 audit(1768352776.849:192): prog-id=48 op=LOAD Jan 14 01:06:16.895984 kernel: audit: type=1334 audit(1768352776.849:193): prog-id=32 op=UNLOAD Jan 14 01:06:16.896016 kernel: audit: type=1334 audit(1768352776.849:194): prog-id=33 op=UNLOAD Jan 14 01:06:16.896041 kernel: audit: type=1334 audit(1768352776.850:195): prog-id=49 op=LOAD Jan 14 01:06:16.896064 kernel: audit: type=1334 audit(1768352776.850:196): prog-id=50 op=LOAD Jan 14 01:06:16.896087 kernel: audit: type=1334 audit(1768352776.850:197): prog-id=44 op=UNLOAD Jan 14 01:06:16.850000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:06:16.852000 audit: BPF prog-id=51 op=LOAD Jan 14 01:06:16.852000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:06:16.854000 audit: BPF prog-id=52 op=LOAD Jan 14 01:06:16.854000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:06:16.854000 audit: BPF 
prog-id=53 op=LOAD Jan 14 01:06:16.854000 audit: BPF prog-id=54 op=LOAD Jan 14 01:06:16.854000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:06:16.854000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:06:16.856000 audit: BPF prog-id=55 op=LOAD Jan 14 01:06:16.856000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:06:16.859000 audit: BPF prog-id=56 op=LOAD Jan 14 01:06:16.859000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:06:16.859000 audit: BPF prog-id=57 op=LOAD Jan 14 01:06:16.859000 audit: BPF prog-id=58 op=LOAD Jan 14 01:06:16.859000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:06:16.859000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:06:16.862000 audit: BPF prog-id=59 op=LOAD Jan 14 01:06:16.862000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:06:16.862000 audit: BPF prog-id=60 op=LOAD Jan 14 01:06:16.862000 audit: BPF prog-id=61 op=LOAD Jan 14 01:06:16.862000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:06:16.862000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:06:16.895372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:06:17.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.065285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:06:17.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.123371 systemd[1]: Reached target network.target - Network. Jan 14 01:06:17.148809 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:06:17.168171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 14 01:06:17.209279 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:06:17.228950 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:06:17.252431 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:06:17.285853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:06:17.316121 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:06:17.337831 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:06:17.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.369460 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:06:17.388151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:17.388831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:06:17.400063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:06:17.424284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:06:17.427000 audit[1520]: SYSTEM_BOOT pid=1520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.448348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 01:06:17.473489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:06:17.474832 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:06:17.474981 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:06:17.475116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:17.479514 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:06:17.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.504162 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:06:17.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.531515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:06:17.536317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:06:17.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:17.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.559552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:06:17.560980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:06:17.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.587087 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:06:17.587850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:06:17.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:17.629481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:17.631164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 14 01:06:17.632000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:06:17.632000 audit[1539]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0c2cf5c0 a2=420 a3=0 items=0 ppid=1508 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:17.632000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:06:17.634479 augenrules[1539]: No rules Jan 14 01:06:17.635252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:06:17.657104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:06:17.695513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:06:17.713028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:06:17.714027 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:06:17.714273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:06:17.714856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:17.721338 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:06:17.722126 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:06:17.743368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 01:06:17.771546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:06:17.779275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:06:17.808423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:06:17.809107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:06:17.831207 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:06:17.855961 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:06:17.856550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:06:17.909830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:17.914504 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:06:17.931255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:06:17.936200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:06:17.980183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:06:18.018456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:06:18.053368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:06:18.073953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:06:18.075374 augenrules[1558]: /sbin/augenrules: No change Jan 14 01:06:18.076045 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 01:06:18.076143 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:06:18.076248 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:06:18.076317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:06:18.085453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:06:18.086165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:06:18.110377 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:06:18.112073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 14 01:06:18.120000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:06:18.120000 audit[1579]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe049119d0 a2=420 a3=0 items=0 ppid=1558 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:18.120000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:06:18.123000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:06:18.123000 audit[1579]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe04913e60 a2=420 a3=0 items=0 ppid=1558 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:18.123000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:06:18.125018 augenrules[1579]: No rules Jan 14 01:06:18.135191 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:06:18.135979 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:06:18.153208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:06:18.153997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:06:18.175061 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:06:18.176213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 14 01:06:18.217253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:06:18.217933 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:06:18.221199 systemd[1]: Finished ensure-sysext.service. Jan 14 01:06:18.256334 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 01:06:18.446029 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:06:18.912587 systemd-resolved[1292]: Clock change detected. Flushing caches. Jan 14 01:06:18.912619 systemd-timesyncd[1590]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 01:06:18.913267 systemd-timesyncd[1590]: Initial clock synchronization to Wed 2026-01-14 01:06:18.912255 UTC. Jan 14 01:06:18.940331 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:06:19.652242 ldconfig[1510]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:06:19.671420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:06:19.694013 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:06:19.757277 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:06:19.784145 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:06:19.805913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:06:19.830547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:06:19.854888 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:06:19.877463 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 14 01:06:19.898056 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:06:19.922201 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:06:19.944328 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:06:19.964490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:06:19.988277 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:06:19.988467 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:06:20.006225 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:06:20.024094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:06:20.045568 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:06:20.066550 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:06:20.090976 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:06:20.113104 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:06:20.139982 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:06:20.158168 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:06:20.178999 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:06:20.196487 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:06:20.211374 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:06:20.228195 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 14 01:06:20.228371 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:06:20.233486 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:06:20.254374 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 01:06:20.284616 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:06:20.307530 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:06:20.332919 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:06:20.355284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:06:20.375621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:06:20.387561 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:06:20.416008 extend-filesystems[1604]: Found /dev/vda6 Jan 14 01:06:20.431602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:06:20.455003 extend-filesystems[1604]: Found /dev/vda9 Jan 14 01:06:20.471173 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing passwd entry cache Jan 14 01:06:20.493448 jq[1603]: false Jan 14 01:06:20.458107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:06:20.495567 extend-filesystems[1604]: Checking size of /dev/vda9 Jan 14 01:06:20.472103 oslogin_cache_refresh[1605]: Refreshing passwd entry cache Jan 14 01:06:20.496361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 14 01:06:20.527615 extend-filesystems[1604]: Resized partition /dev/vda9
Jan 14 01:06:20.580168 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 14 01:06:20.499341 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 01:06:20.580470 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting users, quitting
Jan 14 01:06:20.580470 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 01:06:20.580470 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing group entry cache
Jan 14 01:06:20.580584 extend-filesystems[1621]: resize2fs 1.47.3 (8-Jul-2025)
Jan 14 01:06:20.565203 oslogin_cache_refresh[1605]: Failure getting users, quitting
Jan 14 01:06:20.572217 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 01:06:20.565227 oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 01:06:20.611077 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 01:06:20.565280 oslogin_cache_refresh[1605]: Refreshing group entry cache
Jan 14 01:06:20.657200 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting groups, quitting
Jan 14 01:06:20.657200 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 01:06:20.652228 oslogin_cache_refresh[1605]: Failure getting groups, quitting
Jan 14 01:06:20.652245 oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 01:06:20.662455 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 01:06:20.681288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 01:06:20.682240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 01:06:20.687116 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 01:06:20.708104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 01:06:20.745622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 01:06:20.769169 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 01:06:20.775045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 01:06:20.776053 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 14 01:06:20.776570 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 14 01:06:20.796560 jq[1639]: true
Jan 14 01:06:20.797150 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 14 01:06:20.805500 update_engine[1637]: I20260114 01:06:20.805174 1637 main.cc:92] Flatcar Update Engine starting
Jan 14 01:06:20.813028 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 01:06:20.813339 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 01:06:20.851051 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 01:06:20.852368 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 01:06:20.859137 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 14 01:06:20.859137 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 14 01:06:20.859137 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 14 01:06:20.876188 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 01:06:20.941533 extend-filesystems[1604]: Resized filesystem in /dev/vda9
Jan 14 01:06:20.961559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 01:06:20.983248 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 01:06:21.044430 sshd_keygen[1645]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 01:06:21.065591 jq[1659]: true
Jan 14 01:06:21.081443 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 14 01:06:21.082968 tar[1652]: linux-amd64/LICENSE
Jan 14 01:06:21.082229 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 14 01:06:21.084257 tar[1652]: linux-amd64/helm
Jan 14 01:06:21.141003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 01:06:21.198265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 01:06:21.214233 systemd-logind[1635]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 14 01:06:21.215192 systemd-logind[1635]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 01:06:21.217945 systemd-logind[1635]: New seat seat0.
Jan 14 01:06:21.247459 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 01:06:21.258120 dbus-daemon[1601]: [system] SELinux support is enabled
Jan 14 01:06:21.271197 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 01:06:21.288566 update_engine[1637]: I20260114 01:06:21.288378 1637 update_check_scheduler.cc:74] Next update check in 2m33s
Jan 14 01:06:21.302313 systemd[1]: Starting issuegen.service - Generate /run/issue...
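The extend-filesystems sequence above grew the root filesystem online: the kernel logged "EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks" and resize2fs confirmed the new size. A minimal sketch of what that grow amounts to, with the block arithmetic taken from the log (the `resize2fs` call is shown as a comment since it needs root and the actual device):

```shell
# Sizes from the kernel log: ext4 on vda9 grew from 456704 to 1784827 blocks,
# with a 4k block size (log: "1784827 (4k) blocks long").
old_blocks=456704
new_blocks=1784827
block_size=4096
old_bytes=$((old_blocks * block_size))
new_bytes=$((new_blocks * block_size))
echo "grew / from $old_bytes to $new_bytes bytes"
# The actual online grow is a single call on the mounted device (needs root):
#   resize2fs /dev/vda9
```

Because ext4 supports online growing, the service can run this while / is mounted read-write, which is why the log shows "on-line resizing required" rather than an unmount.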
Jan 14 01:06:21.303527 dbus-daemon[1601]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 14 01:06:21.321590 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 01:06:21.322052 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 01:06:21.349108 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 01:06:21.349152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 01:06:21.373152 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 01:06:21.384151 bash[1705]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 01:06:21.392370 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 01:06:21.426172 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 01:06:21.437163 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 01:06:21.473338 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 01:06:21.475507 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 01:06:21.501293 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 01:06:21.565216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 01:06:21.593995 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 01:06:21.625023 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 01:06:21.652003 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 01:06:21.676041 locksmithd[1711]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 01:06:21.993055 containerd[1661]: time="2026-01-14T01:06:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 14 01:06:21.997914 containerd[1661]: time="2026-01-14T01:06:21.996903843Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.329215030Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="148.278µs"
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.329407178Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.329485645Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.329505222Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.330472678Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.330503705Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.331327414Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.331352921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.333042876Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.333064827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.333080867Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 01:06:22.334255 containerd[1661]: time="2026-01-14T01:06:22.333095464Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.335941 containerd[1661]: time="2026-01-14T01:06:22.333493818Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.335941 containerd[1661]: time="2026-01-14T01:06:22.333512513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 14 01:06:22.335941 containerd[1661]: time="2026-01-14T01:06:22.334461013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.335941 containerd[1661]: time="2026-01-14T01:06:22.335930778Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.336024 containerd[1661]: time="2026-01-14T01:06:22.335977535Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 01:06:22.336024 containerd[1661]: time="2026-01-14T01:06:22.335997472Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 01:06:22.336060 containerd[1661]: time="2026-01-14T01:06:22.336039110Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 01:06:22.353268 containerd[1661]: time="2026-01-14T01:06:22.353179065Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 01:06:22.356171 containerd[1661]: time="2026-01-14T01:06:22.354487500Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 01:06:22.379224 containerd[1661]: time="2026-01-14T01:06:22.378924743Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 14 01:06:22.379224 containerd[1661]: time="2026-01-14T01:06:22.379105320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379360266Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379529632Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379554618Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379573344Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379588462Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379601496Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379616033Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379631362Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379951279Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379968431Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379981856Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 14 01:06:22.380221 containerd[1661]: time="2026-01-14T01:06:22.379996975Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380315910Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380343251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380362417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380376533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380391872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380405998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 14 01:06:22.380937 containerd[1661]: time="2026-01-14T01:06:22.380579062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 14 01:06:22.383564 containerd[1661]: time="2026-01-14T01:06:22.382200569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 14 01:06:22.383564 containerd[1661]: time="2026-01-14T01:06:22.383113082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 14 01:06:22.383899 containerd[1661]: time="2026-01-14T01:06:22.383563804Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 14 01:06:22.383899 containerd[1661]: time="2026-01-14T01:06:22.383589111Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 14 01:06:22.383899 containerd[1661]: time="2026-01-14T01:06:22.383620249Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 14 01:06:22.384604 containerd[1661]: time="2026-01-14T01:06:22.383968078Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 14 01:06:22.384604 containerd[1661]: time="2026-01-14T01:06:22.384128608Z" level=info msg="Start snapshots syncer"
Jan 14 01:06:22.385960 containerd[1661]: time="2026-01-14T01:06:22.385162378Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 14 01:06:22.388180 containerd[1661]: time="2026-01-14T01:06:22.387004317Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 14 01:06:22.388180 containerd[1661]: time="2026-01-14T01:06:22.387971693Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 14 01:06:22.390327 containerd[1661]: time="2026-01-14T01:06:22.388612580Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 14 01:06:22.390327 containerd[1661]: time="2026-01-14T01:06:22.389205667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 14 01:06:22.392414 containerd[1661]: time="2026-01-14T01:06:22.391993582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 14 01:06:22.392476 containerd[1661]: time="2026-01-14T01:06:22.392442500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 14 01:06:22.392476 containerd[1661]: time="2026-01-14T01:06:22.392463098Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 14 01:06:22.393202 containerd[1661]: time="2026-01-14T01:06:22.392607507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 14 01:06:22.393202 containerd[1661]: time="2026-01-14T01:06:22.393062087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 14 01:06:22.393202 containerd[1661]: time="2026-01-14T01:06:22.393082875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 14 01:06:22.393280 containerd[1661]: time="2026-01-14T01:06:22.393236832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393396460Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393592407Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393611081Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393620990Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393629957Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393912604Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393933113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 14 01:06:22.394115 containerd[1661]: time="2026-01-14T01:06:22.393950305Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 14 01:06:22.395072 containerd[1661]: time="2026-01-14T01:06:22.394565894Z" level=info msg="runtime interface created"
Jan 14 01:06:22.396185 containerd[1661]: time="2026-01-14T01:06:22.395149634Z" level=info msg="created NRI interface"
Jan 14 01:06:22.396185 containerd[1661]: time="2026-01-14T01:06:22.395316706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 14 01:06:22.396185 containerd[1661]: time="2026-01-14T01:06:22.395338346Z" level=info msg="Connect containerd service"
Jan 14 01:06:22.396185 containerd[1661]: time="2026-01-14T01:06:22.395362101Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 01:06:22.407271 containerd[1661]: time="2026-01-14T01:06:22.406093204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 01:06:22.712611 tar[1652]: linux-amd64/README.md
Jan 14 01:06:22.765090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 01:06:22.882290 containerd[1661]: time="2026-01-14T01:06:22.881517134Z" level=info msg="Start subscribing containerd event"
Jan 14 01:06:22.882290 containerd[1661]: time="2026-01-14T01:06:22.881583979Z" level=info msg="Start recovering state"
Jan 14 01:06:22.884205 containerd[1661]: time="2026-01-14T01:06:22.882075116Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 01:06:22.884258 containerd[1661]: time="2026-01-14T01:06:22.884215873Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 01:06:22.885147 containerd[1661]: time="2026-01-14T01:06:22.883591687Z" level=info msg="Start event monitor"
Jan 14 01:06:22.885194 containerd[1661]: time="2026-01-14T01:06:22.885152912Z" level=info msg="Start cni network conf syncer for default"
Jan 14 01:06:22.885194 containerd[1661]: time="2026-01-14T01:06:22.885164663Z" level=info msg="Start streaming server"
Jan 14 01:06:22.892530 containerd[1661]: time="2026-01-14T01:06:22.885305917Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 01:06:22.892530 containerd[1661]: time="2026-01-14T01:06:22.892084733Z" level=info msg="runtime interface starting up..."
Jan 14 01:06:22.892530 containerd[1661]: time="2026-01-14T01:06:22.892231647Z" level=info msg="starting plugins..."
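The CRI plugin's error above ("no network config found in /etc/cni/net.d") is expected before a CNI network is installed: containerd's config points `confDir` at /etc/cni/net.d and `binDirs` at /opt/cni/bin, and the directory is still empty at this stage of boot. A sketch of the kind of conflist that clears the error; this writes to a scratch directory rather than the real path, and the bridge name and subnet are illustrative values, not taken from the log:

```shell
# On a real node this file would live in /etc/cni/net.d (confDir from the cri
# plugin config above); here we write to a temp dir to illustrate the shape.
net_d=$(mktemp -d)
cat > "$net_d/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
echo "wrote $net_d/10-bridge.conflist"
```

In a Kubernetes setup this file is usually installed by the cluster's network add-on rather than by hand, and containerd's "cni network conf syncer" (started later in the log) picks it up without a restart.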
Jan 14 01:06:22.894419 containerd[1661]: time="2026-01-14T01:06:22.894033862Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 01:06:22.894434 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 01:06:22.913263 containerd[1661]: time="2026-01-14T01:06:22.904136303Z" level=info msg="containerd successfully booted in 0.912515s"
Jan 14 01:06:24.323570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:06:24.347245 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 01:06:24.372627 systemd[1]: Startup finished in 33.401s (kernel) + 29.011s (initrd) + 27.660s (userspace) = 1min 30.073s.
Jan 14 01:06:24.380081 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:06:25.953409 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 01:06:25.957359 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:37104.service - OpenSSH per-connection server daemon (10.0.0.1:37104).
Jan 14 01:06:26.305442 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 37104 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:06:26.316453 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:06:26.353376 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 01:06:26.358129 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 01:06:26.378393 systemd-logind[1635]: New session 1 of user core.
Jan 14 01:06:26.420337 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 01:06:26.421304 kubelet[1749]: E0114 01:06:26.421131 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:06:26.434624 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 01:06:26.435462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:06:26.436377 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:06:26.438259 systemd[1]: kubelet.service: Consumed 2.451s CPU time, 268.1M memory peak.
Jan 14 01:06:26.499313 (systemd)[1769]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:06:26.521347 systemd-logind[1635]: New session 2 of user core.
Jan 14 01:06:26.942604 systemd[1769]: Queued start job for default target default.target.
Jan 14 01:06:26.963593 systemd[1769]: Created slice app.slice - User Application Slice.
Jan 14 01:06:26.964192 systemd[1769]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 01:06:26.964213 systemd[1769]: Reached target paths.target - Paths.
Jan 14 01:06:26.964292 systemd[1769]: Reached target timers.target - Timers.
Jan 14 01:06:26.974577 systemd[1769]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 01:06:26.980190 systemd[1769]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 01:06:27.051371 systemd[1769]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 01:06:27.051629 systemd[1769]: Reached target sockets.target - Sockets.
Jan 14 01:06:27.058620 systemd[1769]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 01:06:27.059300 systemd[1769]: Reached target basic.target - Basic System.
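The kubelet exit above is the expected failure mode on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join` (consistent with the unset KUBELET_KUBEADM_ARGS warning earlier), and neither has run. A sketch of checking that precondition; runnable anywhere, since it only tests for the file:

```shell
# The kubelet refuses to start without its config file; on kubeadm-managed
# nodes that file appears only after `kubeadm init`/`kubeadm join` runs.
config=/var/lib/kubelet/config.yaml
if [ -f "$config" ]; then
  state=present
else
  state=missing
fi
echo "kubelet config: $state"
```

systemd will keep restarting the unit per its Restart= policy, so once the file appears the kubelet comes up without manual intervention.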
Jan 14 01:06:27.059549 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 01:06:27.060290 systemd[1769]: Reached target default.target - Main User Target.
Jan 14 01:06:27.060353 systemd[1769]: Startup finished in 508ms.
Jan 14 01:06:27.079600 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 01:06:27.130324 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:37108.service - OpenSSH per-connection server daemon (10.0.0.1:37108).
Jan 14 01:06:27.369132 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 37108 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:06:27.376283 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:06:27.401466 systemd-logind[1635]: New session 3 of user core.
Jan 14 01:06:27.418626 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 01:06:27.487486 sshd[1788]: Connection closed by 10.0.0.1 port 37108
Jan 14 01:06:27.489394 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Jan 14 01:06:27.512600 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:37108.service: Deactivated successfully.
Jan 14 01:06:27.520547 systemd[1]: session-3.scope: Deactivated successfully.
Jan 14 01:06:27.528575 systemd-logind[1635]: Session 3 logged out. Waiting for processes to exit.
Jan 14 01:06:27.536530 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124).
Jan 14 01:06:27.538265 systemd-logind[1635]: Removed session 3.
Jan 14 01:06:27.710611 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:06:27.717422 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:06:27.744518 systemd-logind[1635]: New session 4 of user core.
Jan 14 01:06:27.759331 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 01:06:27.818313 sshd[1798]: Connection closed by 10.0.0.1 port 37124
Jan 14 01:06:27.819215 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Jan 14 01:06:27.842604 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:37124.service: Deactivated successfully.
Jan 14 01:06:27.850425 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 01:06:27.858186 systemd-logind[1635]: Session 4 logged out. Waiting for processes to exit.
Jan 14 01:06:27.867364 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:37126.service - OpenSSH per-connection server daemon (10.0.0.1:37126).
Jan 14 01:06:27.874145 systemd-logind[1635]: Removed session 4.
Jan 14 01:06:28.067051 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 37126 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:06:28.076293 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:06:28.101403 systemd-logind[1635]: New session 5 of user core.
Jan 14 01:06:28.123387 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 01:06:28.208321 sshd[1808]: Connection closed by 10.0.0.1 port 37126
Jan 14 01:06:28.210530 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Jan 14 01:06:28.226394 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:37126.service: Deactivated successfully.
Jan 14 01:06:28.231630 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 01:06:28.239484 systemd-logind[1635]: Session 5 logged out. Waiting for processes to exit.
Jan 14 01:06:28.245320 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:37140.service - OpenSSH per-connection server daemon (10.0.0.1:37140).
Jan 14 01:06:28.250181 systemd-logind[1635]: Removed session 5.
Jan 14 01:06:28.434496 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 37140 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:06:28.440505 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:06:28.465502 systemd-logind[1635]: New session 6 of user core. Jan 14 01:06:28.482294 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:06:28.609329 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:06:28.610412 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:06:28.660290 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 14 01:06:28.666172 sshd[1818]: Connection closed by 10.0.0.1 port 37140 Jan 14 01:06:28.667196 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 14 01:06:28.687485 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:37140.service: Deactivated successfully. Jan 14 01:06:28.692235 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:06:28.695272 systemd-logind[1635]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:06:28.703366 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:37150.service - OpenSSH per-connection server daemon (10.0.0.1:37150). Jan 14 01:06:28.706151 systemd-logind[1635]: Removed session 6. Jan 14 01:06:28.884572 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 37150 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:06:28.888484 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:06:28.912320 systemd-logind[1635]: New session 7 of user core. Jan 14 01:06:28.929421 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 01:06:29.020520 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:06:29.022290 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:06:29.047336 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 14 01:06:29.084532 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:06:29.085520 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:06:29.127181 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:06:29.358000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:06:29.363231 augenrules[1856]: No rules Jan 14 01:06:29.373432 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 14 01:06:29.373599 kernel: audit: type=1305 audit(1768352789.358:236): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:06:29.374465 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:06:29.376206 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 01:06:29.380565 sudo[1831]: pam_unix(sudo:session): session closed for user root Jan 14 01:06:29.386603 sshd[1830]: Connection closed by 10.0.0.1 port 37150 Jan 14 01:06:29.388533 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Jan 14 01:06:29.406339 kernel: audit: type=1300 audit(1768352789.358:236): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd7f8af160 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:29.358000 audit[1856]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd7f8af160 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:29.358000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:06:29.465189 kernel: audit: type=1327 audit(1768352789.358:236): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:06:29.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.535366 kernel: audit: type=1130 audit(1768352789.377:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.535492 kernel: audit: type=1131 audit(1768352789.377:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
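An aside on reading the records above (not part of the log): audit PROCTITLE fields (type=1327) are the process command line, hex-encoded with NUL bytes separating the argv entries. A minimal sketch that decodes the record emitted when augenrules reloaded the (empty) rule set:

```python
def decode_proctitle(hex_value: str) -> list:
    """Decode an audit PROCTITLE hex string into its argv list.

    The kernel logs the raw /proc/<pid>/cmdline buffer, so arguments
    are separated by NUL (0x00) bytes.
    """
    raw = bytes.fromhex(hex_value)
    return [arg.decode() for arg in raw.split(b"\x00")]

# Hex value taken verbatim from the type=1327 record above.
argv = decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This confirms the CONFIG_CHANGE came from auditctl re-reading /etc/audit/audit.rules, which augenrules had just regenerated ("No rules").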
terminal=? res=success' Jan 14 01:06:29.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.580294 kernel: audit: type=1106 audit(1768352789.379:239): pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.379000 audit[1831]: USER_END pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.380000 audit[1831]: CRED_DISP pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.640525 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:37150.service: Deactivated successfully. Jan 14 01:06:29.646228 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:06:29.650330 systemd-logind[1635]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:06:29.661233 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:37154.service - OpenSSH per-connection server daemon (10.0.0.1:37154). Jan 14 01:06:29.663580 systemd-logind[1635]: Removed session 7. Jan 14 01:06:29.677618 kernel: audit: type=1104 audit(1768352789.380:240): pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:29.678252 kernel: audit: type=1106 audit(1768352789.393:241): pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.393000 audit[1826]: USER_END pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.746365 kernel: audit: type=1104 audit(1768352789.393:242): pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.393000 audit[1826]: CRED_DISP pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.95:22-10.0.0.1:37150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:37154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:29.835370 kernel: audit: type=1131 audit(1768352789.640:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.95:22-10.0.0.1:37150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:29.902000 audit[1865]: USER_ACCT pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.905590 sshd[1865]: Accepted publickey for core from 10.0.0.1 port 37154 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:06:29.908000 audit[1865]: CRED_ACQ pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.909000 audit[1865]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6e5a9c60 a2=3 a3=0 items=0 ppid=1 pid=1865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:29.909000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:06:29.911512 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:06:29.938373 systemd-logind[1635]: New session 8 of user core. Jan 14 01:06:29.959549 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 01:06:29.970000 audit[1865]: USER_START pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:29.980000 audit[1869]: CRED_ACQ pid=1869 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:06:30.032000 audit[1870]: USER_ACCT pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:30.034189 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:06:30.033000 audit[1870]: CRED_REFR pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:30.034631 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:06:30.034000 audit[1870]: USER_START pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:06:31.388534 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 01:06:31.425131 (dockerd)[1892]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:06:32.579306 dockerd[1892]: time="2026-01-14T01:06:32.578516718Z" level=info msg="Starting up" Jan 14 01:06:32.585430 dockerd[1892]: time="2026-01-14T01:06:32.584546631Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:06:32.685255 dockerd[1892]: time="2026-01-14T01:06:32.684477045Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:06:32.831323 systemd[1]: var-lib-docker-metacopy\x2dcheck733736038-merged.mount: Deactivated successfully. Jan 14 01:06:32.941621 dockerd[1892]: time="2026-01-14T01:06:32.941495981Z" level=info msg="Loading containers: start." Jan 14 01:06:33.008353 kernel: Initializing XFRM netlink socket Jan 14 01:06:33.561000 audit[1944]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.561000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdcc693810 a2=0 a3=0 items=0 ppid=1892 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:06:33.587000 audit[1946]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.587000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc78ed5db0 a2=0 a3=0 items=0 ppid=1892 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:06:33.616000 audit[1948]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.616000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefdbda610 a2=0 a3=0 items=0 ppid=1892 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:06:33.651000 audit[1950]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.651000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd801eeaf0 a2=0 a3=0 items=0 ppid=1892 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:06:33.681000 audit[1952]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.681000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf614e310 a2=0 a3=0 items=0 ppid=1892 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:06:33.708000 audit[1954]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.708000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffac06cef0 a2=0 a3=0 items=0 ppid=1892 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:06:33.738000 audit[1956]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.738000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5feba5b0 a2=0 a3=0 items=0 ppid=1892 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:06:33.769000 audit[1958]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.769000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd16d5f3c0 a2=0 a3=0 items=0 ppid=1892 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:06:33.952000 audit[1961]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.952000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc811d7bb0 a2=0 a3=0 items=0 ppid=1892 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:06:33.981000 audit[1963]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:33.981000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffbc446620 a2=0 a3=0 items=0 ppid=1892 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:33.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:06:34.009000 audit[1965]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:34.009000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 
a1=7ffef9462ba0 a2=0 a3=0 items=0 ppid=1892 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.009000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:06:34.033000 audit[1967]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:34.033000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd6ba1f4b0 a2=0 a3=0 items=0 ppid=1892 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:06:34.065000 audit[1969]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:34.065000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcda26dd60 a2=0 a3=0 items=0 ppid=1892 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:06:34.529000 audit[1999]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.543491 kernel: kauditd_printk_skb: 
50 callbacks suppressed Jan 14 01:06:34.543588 kernel: audit: type=1325 audit(1768352794.529:266): table=nat:15 family=10 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.529000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee98fec70 a2=0 a3=0 items=0 ppid=1892 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.636406 kernel: audit: type=1300 audit(1768352794.529:266): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee98fec70 a2=0 a3=0 items=0 ppid=1892 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:06:34.664260 kernel: audit: type=1327 audit(1768352794.529:266): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:06:34.664338 kernel: audit: type=1325 audit(1768352794.562:267): table=filter:16 family=10 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.562000 audit[2001]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.695041 kernel: audit: type=1300 audit(1768352794.562:267): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc71ba7990 a2=0 a3=0 items=0 ppid=1892 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.562000 
audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc71ba7990 a2=0 a3=0 items=0 ppid=1892 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.562000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:06:34.781330 kernel: audit: type=1327 audit(1768352794.562:267): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:06:34.781563 kernel: audit: type=1325 audit(1768352794.593:268): table=filter:17 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.593000 audit[2003]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.813244 kernel: audit: type=1300 audit(1768352794.593:268): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde091e4a0 a2=0 a3=0 items=0 ppid=1892 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.593000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde091e4a0 a2=0 a3=0 items=0 ppid=1892 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:06:34.911423 kernel: audit: type=1327 audit(1768352794.593:268): 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:06:34.911515 kernel: audit: type=1325 audit(1768352794.619:269): table=filter:18 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.619000 audit[2005]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.619000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4dc282a0 a2=0 a3=0 items=0 ppid=1892 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:06:34.643000 audit[2007]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.643000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe545d5e10 a2=0 a3=0 items=0 ppid=1892 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:06:34.668000 audit[2009]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.668000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffc7b65250 a2=0 a3=0 items=0 ppid=1892 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:06:34.689000 audit[2011]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.689000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd6e5e0000 a2=0 a3=0 items=0 ppid=1892 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:06:34.714000 audit[2013]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.714000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffb0478560 a2=0 a3=0 items=0 ppid=1892 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:06:34.902000 audit[2015]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.902000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=484 a0=3 a1=7ffe478a5900 a2=0 a3=0 items=0 ppid=1892 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:06:34.926000 audit[2017]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.926000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcfa0e3820 a2=0 a3=0 items=0 ppid=1892 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:06:34.955000 audit[2019]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.955000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd0ce145d0 a2=0 a3=0 items=0 ppid=1892 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:06:34.982000 audit[2021]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2021 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:34.982000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe6798e650 a2=0 a3=0 items=0 ppid=1892 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:34.982000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:06:35.007000 audit[2023]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:35.007000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe5282e370 a2=0 a3=0 items=0 ppid=1892 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:06:35.066000 audit[2028]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.066000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd0f8413d0 a2=0 a3=0 items=0 ppid=1892 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.066000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:06:35.094000 audit[2030]: NETFILTER_CFG table=filter:29 family=2 
entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.094000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe58cfe3b0 a2=0 a3=0 items=0 ppid=1892 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:06:35.125000 audit[2032]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.125000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd32b55690 a2=0 a3=0 items=0 ppid=1892 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:06:35.155000 audit[2034]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:35.155000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8587bd30 a2=0 a3=0 items=0 ppid=1892 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:06:35.182000 audit[2036]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:35.182000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcfe23f180 a2=0 a3=0 items=0 ppid=1892 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:06:35.207000 audit[2038]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:06:35.207000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff791c8e90 a2=0 a3=0 items=0 ppid=1892 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:06:35.303000 audit[2043]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.303000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffca6b59aa0 a2=0 a3=0 items=0 ppid=1892 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:06:35.333000 
audit[2045]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.333000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe2ed416c0 a2=0 a3=0 items=0 ppid=1892 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:06:35.448000 audit[2053]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.448000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff408db350 a2=0 a3=0 items=0 ppid=1892 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:06:35.554000 audit[2059]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.554000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd8034ab30 a2=0 a3=0 items=0 ppid=1892 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.554000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:06:35.585000 audit[2061]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.585000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd479c94a0 a2=0 a3=0 items=0 ppid=1892 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:06:35.619000 audit[2063]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.619000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd03b45b70 a2=0 a3=0 items=0 ppid=1892 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:06:35.649000 audit[2065]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.649000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff1df7e0a0 a2=0 a3=0 items=0 ppid=1892 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:06:35.681000 audit[2067]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:06:35.681000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd21ae0f70 a2=0 a3=0 items=0 ppid=1892 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:06:35.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:06:35.685410 systemd-networkd[1424]: docker0: Link UP Jan 14 01:06:35.741245 dockerd[1892]: time="2026-01-14T01:06:35.740350288Z" level=info msg="Loading containers: done." 
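The `PROCTITLE` values in the audit records above are the process command lines, hex-encoded with NUL bytes separating the argv elements. A small decoder (a sketch, not part of the log) recovers the underlying iptables invocations; the sample value below is one of the `proctitle=` strings logged above.

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
    raw = bytes.fromhex(hex_value)
    # argv elements are NUL-separated; join with spaces for readability
    return " ".join(part.decode() for part in raw.split(b"\x00"))

# A PROCTITLE value taken verbatim from the audit records above:
hex_value = ("2F7573722F62696E2F6970367461626C6573002D2D77616974"
             "002D740066696C746572002D4E00444F434B45522D55534552")
print(decode_proctitle(hex_value))
# → /usr/bin/ip6tables --wait -t filter -N DOCKER-USER
```

Decoding each record this way shows the dockerd child processes building the usual DOCKER, DOCKER-USER, DOCKER-FORWARD, and DOCKER-ISOLATION-STAGE chains for both IPv4 (family=2) and IPv6 (family=10).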
Jan 14 01:06:35.845626 dockerd[1892]: time="2026-01-14T01:06:35.845319302Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:06:35.845626 dockerd[1892]: time="2026-01-14T01:06:35.845399652Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:06:35.845626 dockerd[1892]: time="2026-01-14T01:06:35.845494538Z" level=info msg="Initializing buildkit" Jan 14 01:06:36.179575 dockerd[1892]: time="2026-01-14T01:06:36.178547207Z" level=info msg="Completed buildkit initialization" Jan 14 01:06:36.199308 dockerd[1892]: time="2026-01-14T01:06:36.197572517Z" level=info msg="Daemon has completed initialization" Jan 14 01:06:36.199308 dockerd[1892]: time="2026-01-14T01:06:36.198443545Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:06:36.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:36.201359 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:06:36.471600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:06:36.481396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:06:37.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:37.273179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:06:37.317397 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:06:37.688597 kubelet[2118]: E0114 01:06:37.688190 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:06:37.703272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:06:37.704326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:06:37.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:06:37.706541 systemd[1]: kubelet.service: Consumed 867ms CPU time, 111.3M memory peak. Jan 14 01:06:38.980218 containerd[1661]: time="2026-01-14T01:06:38.978547381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:06:41.035395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562224715.mount: Deactivated successfully. Jan 14 01:06:47.734293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:06:47.797302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:06:51.258314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:06:51.314576 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 14 01:06:51.340017 kernel: audit: type=1130 audit(1768352811.278:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:51.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:52.273586 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:06:54.256570 kubelet[2195]: E0114 01:06:54.213456 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:06:54.413606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:06:54.414225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:06:54.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:06:54.429312 systemd[1]: kubelet.service: Consumed 3.745s CPU time, 109.6M memory peak. Jan 14 01:06:54.477964 kernel: audit: type=1131 audit(1768352814.428:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:06:57.527448 containerd[1661]: time="2026-01-14T01:06:57.523329746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:57.536973 containerd[1661]: time="2026-01-14T01:06:57.535263297Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29442106" Jan 14 01:06:57.543401 containerd[1661]: time="2026-01-14T01:06:57.542526153Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:57.566262 containerd[1661]: time="2026-01-14T01:06:57.566202461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:06:57.570476 containerd[1661]: time="2026-01-14T01:06:57.570374455Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 18.59177149s" Jan 14 01:06:57.570476 containerd[1661]: time="2026-01-14T01:06:57.570423124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:06:57.590984 containerd[1661]: time="2026-01-14T01:06:57.590521743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:07:04.473440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
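The containerd "Pulled image ... in Ns" lines pair a bytes-read figure with a wall-clock duration, so the effective pull rate can be derived. As a rough sanity check (my arithmetic, not stated in the log), using the kube-apiserver figures logged above:

```python
# Effective pull throughput for registry.k8s.io/kube-apiserver:v1.33.7,
# using the values from the log: 29442106 bytes read in 18.59177149 s.
bytes_read = 29_442_106
seconds = 18.59177149
mib_per_s = bytes_read / seconds / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")
# → 1.51 MiB/s
```

Around 1.5 MiB/s is slow for a registry pull; note the elapsed time also spans the kubelet restart churn happening concurrently, so it is not a clean network measurement.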
Jan 14 01:07:04.491007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:06.514117 update_engine[1637]: I20260114 01:07:06.498322 1637 update_attempter.cc:509] Updating boot flags... Jan 14 01:07:06.535939 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1628371129 wd_nsec: 1628371034 Jan 14 01:07:06.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:06.887405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:06.938347 kernel: audit: type=1130 audit(1768352826.886:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:06.968923 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:09.098186 kubelet[2228]: E0114 01:07:09.094217 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:09.110631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:09.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:09.113066 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 01:07:09.114375 systemd[1]: kubelet.service: Consumed 3.509s CPU time, 112.3M memory peak. Jan 14 01:07:09.171357 kernel: audit: type=1131 audit(1768352829.111:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:14.574351 containerd[1661]: time="2026-01-14T01:07:14.573354562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:14.584535 containerd[1661]: time="2026-01-14T01:07:14.584096095Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:07:14.591291 containerd[1661]: time="2026-01-14T01:07:14.591237638Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:14.629326 containerd[1661]: time="2026-01-14T01:07:14.628608059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:14.639030 containerd[1661]: time="2026-01-14T01:07:14.637972947Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 17.046892025s" Jan 14 01:07:14.639030 containerd[1661]: time="2026-01-14T01:07:14.638155826Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:07:14.645123 containerd[1661]: time="2026-01-14T01:07:14.644020545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:07:19.235966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:07:19.246119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:20.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:20.331199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:20.383276 kernel: audit: type=1130 audit(1768352840.331:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:20.395153 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:20.815163 kubelet[2254]: E0114 01:07:20.814585 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:20.827025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:20.827275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:07:20.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:07:20.829047 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 108.2M memory peak. Jan 14 01:07:20.887973 kernel: audit: type=1131 audit(1768352840.827:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:21.743325 containerd[1661]: time="2026-01-14T01:07:21.742530575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:21.746153 containerd[1661]: time="2026-01-14T01:07:21.746009632Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:07:21.751926 containerd[1661]: time="2026-01-14T01:07:21.751556186Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:21.764153 containerd[1661]: time="2026-01-14T01:07:21.763581664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:21.767859 containerd[1661]: time="2026-01-14T01:07:21.767586897Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 7.123528892s" Jan 14 01:07:21.768287 containerd[1661]: time="2026-01-14T01:07:21.767625689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference 
\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:07:21.770996 containerd[1661]: time="2026-01-14T01:07:21.770279472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:07:27.930304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056170439.mount: Deactivated successfully. Jan 14 01:07:30.972313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 01:07:31.001196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:32.361561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:32.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:32.420206 kernel: audit: type=1130 audit(1768352852.364:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:32.441245 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:32.708255 kubelet[2279]: E0114 01:07:32.707919 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:32.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed'
Jan 14 01:07:32.714956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:07:32.715214 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:07:32.716057 systemd[1]: kubelet.service: Consumed 1.194s CPU time, 110.4M memory peak.
Jan 14 01:07:32.746925 kernel: audit: type=1131 audit(1768352852.714:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:07:33.174606 containerd[1661]: time="2026-01-14T01:07:33.174545365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:33.176998 containerd[1661]: time="2026-01-14T01:07:33.176929382Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374"
Jan 14 01:07:33.180163 containerd[1661]: time="2026-01-14T01:07:33.180084409Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:33.184322 containerd[1661]: time="2026-01-14T01:07:33.184257955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:33.185029 containerd[1661]: time="2026-01-14T01:07:33.184985769Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 11.414659721s"
Jan 14 01:07:33.185185 containerd[1661]: time="2026-01-14T01:07:33.185156858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 14 01:07:33.188822 containerd[1661]: time="2026-01-14T01:07:33.188611207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 14 01:07:33.903869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284745368.mount: Deactivated successfully.
Jan 14 01:07:36.986315 containerd[1661]: time="2026-01-14T01:07:36.986177950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:36.991003 containerd[1661]: time="2026-01-14T01:07:36.990960754Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128849"
Jan 14 01:07:36.995982 containerd[1661]: time="2026-01-14T01:07:36.995945105Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:37.002127 containerd[1661]: time="2026-01-14T01:07:37.002086873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:37.003513 containerd[1661]: time="2026-01-14T01:07:37.003477464Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.814566047s"
Jan 14 01:07:37.003844 containerd[1661]: time="2026-01-14T01:07:37.003632113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 14 01:07:37.014602 containerd[1661]: time="2026-01-14T01:07:37.009271419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 14 01:07:38.522118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578662529.mount: Deactivated successfully.
Jan 14 01:07:38.562267 containerd[1661]: time="2026-01-14T01:07:38.561316184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 01:07:38.567088 containerd[1661]: time="2026-01-14T01:07:38.566953497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 14 01:07:38.572346 containerd[1661]: time="2026-01-14T01:07:38.572152285Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 01:07:38.610491 containerd[1661]: time="2026-01-14T01:07:38.609497601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 01:07:38.611531 containerd[1661]: time="2026-01-14T01:07:38.611201104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.598726612s"
Jan 14 01:07:38.611531 containerd[1661]: time="2026-01-14T01:07:38.611241148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 14 01:07:38.616937 containerd[1661]: time="2026-01-14T01:07:38.616259475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 14 01:07:40.528609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202574251.mount: Deactivated successfully.
Jan 14 01:07:42.863138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 14 01:07:42.874056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:07:44.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:44.251004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:07:44.279843 kernel: audit: type=1130 audit(1768352864.249:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:44.341196 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:07:47.659331 kubelet[2364]: E0114 01:07:47.658940 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:07:47.670880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:07:47.671235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:07:47.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:07:47.672978 systemd[1]: kubelet.service: Consumed 6.067s CPU time, 109.9M memory peak.
Jan 14 01:07:47.714972 kernel: audit: type=1131 audit(1768352867.671:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:07:57.721152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 14 01:07:57.728999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:07:57.735580 containerd[1661]: time="2026-01-14T01:07:57.735534322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:57.740392 containerd[1661]: time="2026-01-14T01:07:57.740356027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57081318"
Jan 14 01:07:57.744350 containerd[1661]: time="2026-01-14T01:07:57.744314344Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:57.756130 containerd[1661]: time="2026-01-14T01:07:57.754976162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:07:57.756130 containerd[1661]: time="2026-01-14T01:07:57.755354194Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 19.126280152s"
Jan 14 01:07:57.756130 containerd[1661]: time="2026-01-14T01:07:57.755393648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 14 01:07:58.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:58.143888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:07:58.176869 kernel: audit: type=1130 audit(1768352878.144:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:58.204090 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:07:58.564366 kubelet[2432]: E0114 01:07:58.564097 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:07:58.575337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:07:58.576021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:07:58.579058 systemd[1]: kubelet.service: Consumed 750ms CPU time, 108.9M memory peak.
Jan 14 01:07:58.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:07:58.618175 kernel: audit: type=1131 audit(1768352878.577:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:08:03.167522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:08:03.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:03.168004 systemd[1]: kubelet.service: Consumed 750ms CPU time, 108.9M memory peak.
Jan 14 01:08:03.173391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:08:03.201090 kernel: audit: type=1130 audit(1768352883.167:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:03.201226 kernel: audit: type=1131 audit(1768352883.167:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:03.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:03.268993 systemd[1]: Reload requested from client PID 2463 ('systemctl') (unit session-8.scope)...
Jan 14 01:08:03.269018 systemd[1]: Reloading...
Jan 14 01:08:03.509983 zram_generator::config[2510]: No configuration found.
Jan 14 01:08:04.086903 systemd[1]: Reloading finished in 816 ms.
Jan 14 01:08:04.160000 audit: BPF prog-id=66 op=LOAD
Jan 14 01:08:04.162000 audit: BPF prog-id=51 op=UNLOAD
Jan 14 01:08:04.184840 kernel: audit: type=1334 audit(1768352884.160:310): prog-id=66 op=LOAD
Jan 14 01:08:04.184912 kernel: audit: type=1334 audit(1768352884.162:311): prog-id=51 op=UNLOAD
Jan 14 01:08:04.184954 kernel: audit: type=1334 audit(1768352884.165:312): prog-id=67 op=LOAD
Jan 14 01:08:04.165000 audit: BPF prog-id=67 op=LOAD
Jan 14 01:08:04.195287 kernel: audit: type=1334 audit(1768352884.165:313): prog-id=59 op=UNLOAD
Jan 14 01:08:04.165000 audit: BPF prog-id=59 op=UNLOAD
Jan 14 01:08:04.206163 kernel: audit: type=1334 audit(1768352884.165:314): prog-id=68 op=LOAD
Jan 14 01:08:04.165000 audit: BPF prog-id=68 op=LOAD
Jan 14 01:08:04.215951 kernel: audit: type=1334 audit(1768352884.165:315): prog-id=69 op=LOAD
Jan 14 01:08:04.165000 audit: BPF prog-id=69 op=LOAD
Jan 14 01:08:04.165000 audit: BPF prog-id=60 op=UNLOAD
Jan 14 01:08:04.232866 kernel: audit: type=1334 audit(1768352884.165:316): prog-id=60 op=UNLOAD
Jan 14 01:08:04.166000 audit: BPF prog-id=61 op=UNLOAD
Jan 14 01:08:04.253518 kernel: audit: type=1334 audit(1768352884.166:317): prog-id=61 op=UNLOAD
Jan 14 01:08:04.167000 audit: BPF prog-id=70 op=LOAD
Jan 14 01:08:04.167000 audit: BPF prog-id=56 op=UNLOAD
Jan 14 01:08:04.168000 audit: BPF prog-id=71 op=LOAD
Jan 14 01:08:04.168000 audit: BPF prog-id=72 op=LOAD
Jan 14 01:08:04.168000 audit: BPF prog-id=57 op=UNLOAD
Jan 14 01:08:04.168000 audit: BPF prog-id=58 op=UNLOAD
Jan 14 01:08:04.174000 audit: BPF prog-id=73 op=LOAD
Jan 14 01:08:04.174000 audit: BPF prog-id=63 op=UNLOAD
Jan 14 01:08:04.174000 audit: BPF prog-id=74 op=LOAD
Jan 14 01:08:04.175000 audit: BPF prog-id=75 op=LOAD
Jan 14 01:08:04.175000 audit: BPF prog-id=64 op=UNLOAD
Jan 14 01:08:04.175000 audit: BPF prog-id=65 op=UNLOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=76 op=LOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=52 op=UNLOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=77 op=LOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=78 op=LOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=53 op=UNLOAD
Jan 14 01:08:04.178000 audit: BPF prog-id=54 op=UNLOAD
Jan 14 01:08:04.179000 audit: BPF prog-id=79 op=LOAD
Jan 14 01:08:04.179000 audit: BPF prog-id=46 op=UNLOAD
Jan 14 01:08:04.179000 audit: BPF prog-id=80 op=LOAD
Jan 14 01:08:04.180000 audit: BPF prog-id=81 op=LOAD
Jan 14 01:08:04.180000 audit: BPF prog-id=47 op=UNLOAD
Jan 14 01:08:04.180000 audit: BPF prog-id=48 op=UNLOAD
Jan 14 01:08:04.180000 audit: BPF prog-id=82 op=LOAD
Jan 14 01:08:04.180000 audit: BPF prog-id=83 op=LOAD
Jan 14 01:08:04.181000 audit: BPF prog-id=49 op=UNLOAD
Jan 14 01:08:04.181000 audit: BPF prog-id=50 op=UNLOAD
Jan 14 01:08:04.183000 audit: BPF prog-id=84 op=LOAD
Jan 14 01:08:04.183000 audit: BPF prog-id=55 op=UNLOAD
Jan 14 01:08:04.186000 audit: BPF prog-id=85 op=LOAD
Jan 14 01:08:04.259000 audit: BPF prog-id=62 op=UNLOAD
Jan 14 01:08:04.318010 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 14 01:08:04.318269 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 14 01:08:04.319387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:08:04.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:08:04.320273 systemd[1]: kubelet.service: Consumed 271ms CPU time, 98.5M memory peak.
Jan 14 01:08:04.327118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:08:04.771052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:08:04.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:04.813568 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 01:08:05.020052 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 01:08:05.020052 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 14 01:08:05.020052 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 01:08:05.020052 kubelet[2557]: I0114 01:08:05.019575 2557 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 01:08:06.057958 kubelet[2557]: I0114 01:08:06.057628 2557 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 14 01:08:06.057958 kubelet[2557]: I0114 01:08:06.057938 2557 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 01:08:06.058581 kubelet[2557]: I0114 01:08:06.058156 2557 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 14 01:08:06.123627 kubelet[2557]: E0114 01:08:06.123349 2557 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 14 01:08:06.127847 kubelet[2557]: I0114 01:08:06.126863 2557 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 01:08:06.153954 kubelet[2557]: I0114 01:08:06.153811 2557 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 14 01:08:06.177383 kubelet[2557]: I0114 01:08:06.177036 2557 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 01:08:06.178062 kubelet[2557]: I0114 01:08:06.177839 2557 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 01:08:06.178545 kubelet[2557]: I0114 01:08:06.177970 2557 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 14 01:08:06.178545 kubelet[2557]: I0114 01:08:06.178190 2557 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 01:08:06.178545 kubelet[2557]: I0114 01:08:06.178204 2557 container_manager_linux.go:303] "Creating device plugin manager"
Jan 14 01:08:06.180926 kubelet[2557]: I0114 01:08:06.180311 2557 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 01:08:06.188813 kubelet[2557]: I0114 01:08:06.188552 2557 kubelet.go:480] "Attempting to sync node with API server"
Jan 14 01:08:06.189249 kubelet[2557]: I0114 01:08:06.189069 2557 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 01:08:06.189249 kubelet[2557]: I0114 01:08:06.189107 2557 kubelet.go:386] "Adding apiserver pod source"
Jan 14 01:08:06.189249 kubelet[2557]: I0114 01:08:06.189119 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 01:08:06.198299 kubelet[2557]: E0114 01:08:06.198132 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 14 01:08:06.201892 kubelet[2557]: E0114 01:08:06.201577 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 14 01:08:06.203146 kubelet[2557]: I0114 01:08:06.203004 2557 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 14 01:08:06.204185 kubelet[2557]: I0114 01:08:06.204126 2557 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 14 01:08:06.210319 kubelet[2557]: W0114 01:08:06.209943 2557 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 01:08:06.228333 kubelet[2557]: I0114 01:08:06.227382 2557 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 14 01:08:06.228333 kubelet[2557]: I0114 01:08:06.227993 2557 server.go:1289] "Started kubelet"
Jan 14 01:08:06.230877 kubelet[2557]: I0114 01:08:06.228621 2557 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 01:08:06.239525 kubelet[2557]: I0114 01:08:06.239386 2557 server.go:317] "Adding debug handlers to kubelet server"
Jan 14 01:08:06.243912 kubelet[2557]: E0114 01:08:06.241127 2557 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a7394b5275878 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 01:08:06.227949688 +0000 UTC m=+1.391811780,LastTimestamp:2026-01-14 01:08:06.227949688 +0000 UTC m=+1.391811780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 14 01:08:06.244195 kubelet[2557]: I0114 01:08:06.243936 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 01:08:06.247213 kubelet[2557]: I0114 01:08:06.246903 2557 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 01:08:06.252069 kubelet[2557]: I0114 01:08:06.251900 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 01:08:06.253001 kubelet[2557]: I0114 01:08:06.252976 2557 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 14 01:08:06.253985 kubelet[2557]: I0114 01:08:06.253261 2557 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 14 01:08:06.255332 kubelet[2557]: E0114 01:08:06.255305 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 14 01:08:06.258938 kubelet[2557]: E0114 01:08:06.258138 2557 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 14 01:08:06.258938 kubelet[2557]: E0114 01:08:06.258599 2557 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms"
Jan 14 01:08:06.259067 kubelet[2557]: I0114 01:08:06.253272 2557 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 14 01:08:06.259275 kubelet[2557]: I0114 01:08:06.259254 2557 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 01:08:06.259353 kubelet[2557]: E0114 01:08:06.259329 2557 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 01:08:06.262279 kubelet[2557]: I0114 01:08:06.262008 2557 factory.go:223] Registration of the containerd container factory successfully
Jan 14 01:08:06.262279 kubelet[2557]: I0114 01:08:06.262129 2557 factory.go:223] Registration of the systemd container factory successfully
Jan 14 01:08:06.262279 kubelet[2557]: I0114 01:08:06.262266 2557 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 01:08:06.303000 audit[2579]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.303000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcefbf21f0 a2=0 a3=0 items=0 ppid=2557 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 14 01:08:06.313000 audit[2581]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.313000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf2a6baf0 a2=0 a3=0 items=0 ppid=2557 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 14 01:08:06.325514 kubelet[2557]: I0114 01:08:06.324878 2557 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 14 01:08:06.325514 kubelet[2557]: I0114 01:08:06.325003 2557 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 14 01:08:06.325514 kubelet[2557]: I0114 01:08:06.325026 2557 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 01:08:06.342000 audit[2584]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.342000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc22ab6120 a2=0 a3=0 items=0 ppid=2557 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 14 01:08:06.360275 kubelet[2557]: E0114 01:08:06.358877 2557 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 14 01:08:06.369000 audit[2586]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.369000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe798d07f0 a2=0 a3=0 items=0 ppid=2557 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 14 01:08:06.426312 kubelet[2557]: I0114 01:08:06.426046 2557 policy_none.go:49] "None policy: Start"
Jan 14 01:08:06.426312 kubelet[2557]: I0114 01:08:06.426181 2557 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 14 01:08:06.426312 kubelet[2557]: I0114 01:08:06.426199 2557 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 01:08:06.461576 kubelet[2557]: E0114 01:08:06.461538 2557 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 14 01:08:06.462327 kubelet[2557]: E0114 01:08:06.462301 2557 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms"
Jan 14 01:08:06.470518 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 01:08:06.475000 audit[2589]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.475000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffde7b63470 a2=0 a3=0 items=0 ppid=2557 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jan 14 01:08:06.480881 kubelet[2557]: I0114 01:08:06.479573 2557 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 14 01:08:06.488000 audit[2592]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.488000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc729669f0 a2=0 a3=0 items=0 ppid=2557 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 01:08:06.495000 audit[2591]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:08:06.495000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd2859f980 a2=0 a3=0 items=0 ppid=2557 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 14 01:08:06.499604 kubelet[2557]: I0114 01:08:06.498882 2557 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 14 01:08:06.499604 kubelet[2557]: I0114 01:08:06.499313 2557 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 14 01:08:06.499604 kubelet[2557]: I0114 01:08:06.499346 2557 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 14 01:08:06.499604 kubelet[2557]: I0114 01:08:06.499358 2557 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 14 01:08:06.499604 kubelet[2557]: E0114 01:08:06.499540 2557 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 01:08:06.501506 kubelet[2557]: E0114 01:08:06.501241 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 14 01:08:06.504602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 01:08:06.506000 audit[2593]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.506000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb79e42f0 a2=0 a3=0 items=0 ppid=2557 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 01:08:06.517000 audit[2594]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:08:06.517325 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 01:08:06.517000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb5ec6ee0 a2=0 a3=0 items=0 ppid=2557 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 01:08:06.524000 audit[2595]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:08:06.524000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc284a4240 a2=0 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 01:08:06.533000 audit[2596]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:08:06.533000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc000dc40 a2=0 a3=0 items=0 ppid=2557 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 01:08:06.542218 kubelet[2557]: E0114 01:08:06.542183 2557 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 14 01:08:06.542000 audit[2597]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:08:06.542000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4c56d1e0 a2=0 a3=0 items=0 ppid=2557 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:06.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 01:08:06.547239 kubelet[2557]: I0114 01:08:06.546847 2557 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 14 01:08:06.547239 kubelet[2557]: I0114 01:08:06.546866 2557 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 01:08:06.548208 kubelet[2557]: I0114 01:08:06.547538 2557 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 01:08:06.552004 kubelet[2557]: E0114 01:08:06.551199 2557 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Jan 14 01:08:06.552004 kubelet[2557]: E0114 01:08:06.551333 2557 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 01:08:06.655320 kubelet[2557]: I0114 01:08:06.651542 2557 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:06.660140 kubelet[2557]: E0114 01:08:06.659980 2557 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jan 14 01:08:06.670943 kubelet[2557]: I0114 01:08:06.669981 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:06.671129 kubelet[2557]: I0114 01:08:06.671105 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:06.671222 kubelet[2557]: I0114 01:08:06.671202 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:06.671343 kubelet[2557]: I0114 01:08:06.671323 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:06.671548 kubelet[2557]: I0114 01:08:06.671528 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:06.671934 kubelet[2557]: I0114 01:08:06.671619 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:06.672033 kubelet[2557]: I0114 01:08:06.672014 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:06.672585 kubelet[2557]: I0114 01:08:06.672561 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:06.673178 kubelet[2557]: I0114 01:08:06.673005 2557 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:06.676342 systemd[1]: Created slice kubepods-burstable-poda1dc4ae38394d7c6112a3d8945eb0f83.slice - libcontainer container kubepods-burstable-poda1dc4ae38394d7c6112a3d8945eb0f83.slice. Jan 14 01:08:06.706533 kubelet[2557]: E0114 01:08:06.706367 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:06.718317 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 14 01:08:06.741374 kubelet[2557]: E0114 01:08:06.741345 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:06.749119 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 14 01:08:06.767573 kubelet[2557]: E0114 01:08:06.767008 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:06.865970 kubelet[2557]: E0114 01:08:06.865625 2557 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Jan 14 01:08:06.869259 kubelet[2557]: I0114 01:08:06.867907 2557 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:06.873033 kubelet[2557]: E0114 01:08:06.870230 2557 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jan 14 01:08:07.016837 kubelet[2557]: E0114 01:08:07.010533 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:07.022073 containerd[1661]: time="2026-01-14T01:08:07.020036393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1dc4ae38394d7c6112a3d8945eb0f83,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:07.049535 kubelet[2557]: E0114 01:08:07.044066 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:07.050156 containerd[1661]: time="2026-01-14T01:08:07.049382332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:07.073299 kubelet[2557]: E0114 01:08:07.073131 2557 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:07.075282 containerd[1661]: time="2026-01-14T01:08:07.075188008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:07.283059 kubelet[2557]: I0114 01:08:07.282132 2557 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:07.286299 kubelet[2557]: E0114 01:08:07.286059 2557 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jan 14 01:08:07.297891 containerd[1661]: time="2026-01-14T01:08:07.297075288Z" level=info msg="connecting to shim e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d" address="unix:///run/containerd/s/cbf1213653b5cc3fc44c7640c791c6dc092715475dcda4d038cdb5e0c19073b8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:07.318787 containerd[1661]: time="2026-01-14T01:08:07.317999177Z" level=info msg="connecting to shim c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50" address="unix:///run/containerd/s/eddfd64b9b5f104c6eec754c79e3df60ab419983723696204ccfffcfb6c56155" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:07.319934 containerd[1661]: time="2026-01-14T01:08:07.319078592Z" level=info msg="connecting to shim dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98" address="unix:///run/containerd/s/10ec31d28531ee3aca38bd41c0eaaed5bd9c10f8bf3c65e846469164a2012503" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:07.346266 kubelet[2557]: E0114 01:08:07.344354 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:08:07.443007 systemd[1]: Started cri-containerd-dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98.scope - libcontainer container dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98. Jan 14 01:08:07.459965 kubelet[2557]: E0114 01:08:07.459268 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:08:07.524320 systemd[1]: Started cri-containerd-c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50.scope - libcontainer container c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50. Jan 14 01:08:07.536598 systemd[1]: Started cri-containerd-e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d.scope - libcontainer container e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d. 
Jan 14 01:08:07.537872 kubelet[2557]: E0114 01:08:07.536920 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:08:07.543000 audit: BPF prog-id=86 op=LOAD Jan 14 01:08:07.548000 audit: BPF prog-id=87 op=LOAD Jan 14 01:08:07.548000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.549000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:08:07.549000 audit[2655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.550000 audit: BPF prog-id=88 op=LOAD Jan 14 01:08:07.550000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2628 
pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.553000 audit: BPF prog-id=89 op=LOAD Jan 14 01:08:07.553000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.553000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:08:07.553000 audit[2655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.554000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:08:07.554000 audit[2655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.554000 audit: BPF prog-id=90 op=LOAD Jan 14 01:08:07.554000 audit[2655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2628 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463643337653633633033613366653537333235323538343438656430 Jan 14 01:08:07.606000 audit: BPF prog-id=91 op=LOAD Jan 14 01:08:07.606000 audit: BPF prog-id=92 op=LOAD Jan 14 01:08:07.606000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.606000 audit: BPF prog-id=92 op=UNLOAD 
Jan 14 01:08:07.606000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.611000 audit: BPF prog-id=93 op=LOAD Jan 14 01:08:07.611000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.611000 audit: BPF prog-id=94 op=LOAD Jan 14 01:08:07.611000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 
01:08:07.613000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:08:07.613000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.613000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:08:07.613000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.614000 audit: BPF prog-id=95 op=LOAD Jan 14 01:08:07.614000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2626 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.614000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396162333661323030643131383466376534646262343839626161 Jan 14 01:08:07.645000 audit: BPF prog-id=96 op=LOAD Jan 14 01:08:07.657000 audit: BPF prog-id=97 op=LOAD Jan 14 01:08:07.657000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.657000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:08:07.657000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.661000 audit: BPF prog-id=98 op=LOAD Jan 14 01:08:07.661000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.661000 audit: BPF prog-id=99 op=LOAD Jan 14 01:08:07.661000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.661000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:08:07.661000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.662000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:08:07.662000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.662000 audit: BPF prog-id=100 op=LOAD Jan 14 01:08:07.662000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2624 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:07.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313062396437633536646630343163306430393637393061663163 Jan 14 01:08:07.666577 kubelet[2557]: E0114 01:08:07.666520 2557 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Jan 14 01:08:07.740849 containerd[1661]: time="2026-01-14T01:08:07.740143675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1dc4ae38394d7c6112a3d8945eb0f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98\"" Jan 14 01:08:07.746835 kubelet[2557]: E0114 01:08:07.746099 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:07.785582 
containerd[1661]: time="2026-01-14T01:08:07.785536618Z" level=info msg="CreateContainer within sandbox \"dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 14 01:08:07.791624 kubelet[2557]: E0114 01:08:07.790620 2557 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 14 01:08:07.807342 containerd[1661]: time="2026-01-14T01:08:07.807104722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50\""
Jan 14 01:08:07.817510 kubelet[2557]: E0114 01:08:07.817056 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:07.847773 containerd[1661]: time="2026-01-14T01:08:07.847245475Z" level=info msg="Container 24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:08:07.849880 containerd[1661]: time="2026-01-14T01:08:07.849306674Z" level=info msg="CreateContainer within sandbox \"c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 14 01:08:07.856985 containerd[1661]: time="2026-01-14T01:08:07.856287261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d\""
Jan 14 01:08:07.863256 kubelet[2557]: E0114 01:08:07.862549 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:07.884894 containerd[1661]: time="2026-01-14T01:08:07.884355943Z" level=info msg="CreateContainer within sandbox \"e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 14 01:08:07.891304 containerd[1661]: time="2026-01-14T01:08:07.891153393Z" level=info msg="CreateContainer within sandbox \"dcd37e63c03a3fe57325258448ed063876f920177189daca5efa18667bee2b98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8\""
Jan 14 01:08:07.898202 containerd[1661]: time="2026-01-14T01:08:07.896884509Z" level=info msg="StartContainer for \"24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8\""
Jan 14 01:08:07.900312 containerd[1661]: time="2026-01-14T01:08:07.900083117Z" level=info msg="Container ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:08:07.904109 containerd[1661]: time="2026-01-14T01:08:07.903193952Z" level=info msg="connecting to shim 24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8" address="unix:///run/containerd/s/10ec31d28531ee3aca38bd41c0eaaed5bd9c10f8bf3c65e846469164a2012503" protocol=ttrpc version=3
Jan 14 01:08:07.919132 containerd[1661]: time="2026-01-14T01:08:07.918289960Z" level=info msg="Container 1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:08:07.938322 containerd[1661]: time="2026-01-14T01:08:07.938152656Z" level=info msg="CreateContainer within sandbox \"c99ab36a200d1184f7e4dbb489baade21e8a1fd4fe986b7bd01481fe3a23ba50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e\""
Jan 14 01:08:07.943254 containerd[1661]: time="2026-01-14T01:08:07.942084107Z" level=info msg="StartContainer for \"ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e\""
Jan 14 01:08:07.944830 containerd[1661]: time="2026-01-14T01:08:07.944247967Z" level=info msg="connecting to shim ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e" address="unix:///run/containerd/s/eddfd64b9b5f104c6eec754c79e3df60ab419983723696204ccfffcfb6c56155" protocol=ttrpc version=3
Jan 14 01:08:07.958097 containerd[1661]: time="2026-01-14T01:08:07.957593497Z" level=info msg="CreateContainer within sandbox \"e610b9d7c56df041c0d096790af1c64749b2d56528f5fc99bd2c85787677508d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645\""
Jan 14 01:08:07.971989 containerd[1661]: time="2026-01-14T01:08:07.971343699Z" level=info msg="StartContainer for \"1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645\""
Jan 14 01:08:07.978252 containerd[1661]: time="2026-01-14T01:08:07.978011729Z" level=info msg="connecting to shim 1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645" address="unix:///run/containerd/s/cbf1213653b5cc3fc44c7640c791c6dc092715475dcda4d038cdb5e0c19073b8" protocol=ttrpc version=3
Jan 14 01:08:08.020628 systemd[1]: Started cri-containerd-24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8.scope - libcontainer container 24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8.
Jan 14 01:08:08.061309 systemd[1]: Started cri-containerd-ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e.scope - libcontainer container ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e.
Jan 14 01:08:08.072033 systemd[1]: Started cri-containerd-1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645.scope - libcontainer container 1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645.
Jan 14 01:08:08.117031 kubelet[2557]: I0114 01:08:08.115913 2557 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 14 01:08:08.117031 kubelet[2557]: E0114 01:08:08.116315 2557 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Jan 14 01:08:08.132000 audit: BPF prog-id=101 op=LOAD
Jan 14 01:08:08.136000 audit: BPF prog-id=102 op=LOAD
Jan 14 01:08:08.136000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.137000 audit: BPF prog-id=102 op=UNLOAD
Jan 14 01:08:08.137000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.138000 audit: BPF prog-id=103 op=LOAD
Jan 14 01:08:08.138000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.138000 audit: BPF prog-id=104 op=LOAD
Jan 14 01:08:08.138000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.138000 audit: BPF prog-id=104 op=UNLOAD
Jan 14 01:08:08.138000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.138000 audit: BPF prog-id=103 op=UNLOAD
Jan 14 01:08:08.138000 audit[2740]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.138000 audit: BPF prog-id=105 op=LOAD
Jan 14 01:08:08.138000 audit[2740]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2628 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234636661353561663831393362666634323463323561376466336161
Jan 14 01:08:08.148000 audit: BPF prog-id=106 op=LOAD
Jan 14 01:08:08.165000 audit: BPF prog-id=107 op=LOAD
Jan 14 01:08:08.202570 kernel: kauditd_printk_skb: 160 callbacks suppressed
Jan 14 01:08:08.202900 kernel: audit: type=1300 audit(1768352888.165:397): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.165000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.205534 kubelet[2557]: E0114 01:08:08.205373 2557 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 14 01:08:08.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.280598 kernel: audit: type=1327 audit(1768352888.165:397): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.282956 kernel: audit: type=1334 audit(1768352888.175:398): prog-id=107 op=UNLOAD
Jan 14 01:08:08.175000 audit: BPF prog-id=107 op=UNLOAD
Jan 14 01:08:08.175000 audit[2746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.344600 kernel: audit: type=1300 audit(1768352888.175:398): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.344947 kernel: audit: type=1327 audit(1768352888.175:398): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.175000 audit: BPF prog-id=108 op=LOAD
Jan 14 01:08:08.448344 kernel: audit: type=1334 audit(1768352888.175:399): prog-id=108 op=LOAD
Jan 14 01:08:08.448559 kernel: audit: type=1300 audit(1768352888.175:399): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.175000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.545943 containerd[1661]: time="2026-01-14T01:08:08.543949302Z" level=info msg="StartContainer for \"24cfa55af8193bff424c25a7df3aab89eb368a9f21c17d79b0c9e377c5ee86b8\" returns successfully"
Jan 14 01:08:08.559995 kernel: audit: type=1327 audit(1768352888.175:399): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.572064 kernel: audit: type=1334 audit(1768352888.176:400): prog-id=109 op=LOAD
Jan 14 01:08:08.176000 audit: BPF prog-id=109 op=LOAD
Jan 14 01:08:08.176000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.623190 kernel: audit: type=1300 audit(1768352888.176:400): arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.176000 audit: BPF prog-id=109 op=UNLOAD
Jan 14 01:08:08.176000 audit[2746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.176000 audit: BPF prog-id=108 op=UNLOAD
Jan 14 01:08:08.176000 audit[2746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.176000 audit: BPF prog-id=110 op=LOAD
Jan 14 01:08:08.176000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2626 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616331636164666135613036646634636636363834393966323065
Jan 14 01:08:08.209000 audit: BPF prog-id=111 op=LOAD
Jan 14 01:08:08.215000 audit: BPF prog-id=112 op=LOAD
Jan 14 01:08:08.215000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.215000 audit: BPF prog-id=112 op=UNLOAD
Jan 14 01:08:08.215000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.217000 audit: BPF prog-id=113 op=LOAD
Jan 14 01:08:08.217000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.217000 audit: BPF prog-id=114 op=LOAD
Jan 14 01:08:08.217000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.217000 audit: BPF prog-id=114 op=UNLOAD
Jan 14 01:08:08.217000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.217000 audit: BPF prog-id=113 op=UNLOAD
Jan 14 01:08:08.217000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.217000 audit: BPF prog-id=115 op=LOAD
Jan 14 01:08:08.217000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2624 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:08.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162346639336561323030343861663938393939663030613534316631
Jan 14 01:08:08.684263 containerd[1661]: time="2026-01-14T01:08:08.682996529Z" level=info msg="StartContainer for \"ddac1cadfa5a06df4cf668499f20e6482f9083503f846895623b869d1660602e\" returns successfully"
Jan 14 01:08:08.684263 containerd[1661]: time="2026-01-14T01:08:08.684208085Z" level=info msg="StartContainer for \"1b4f93ea20048af98999f00a541f17311445a2bebeba050769fb18867d914645\" returns successfully"
Jan 14 01:08:09.637336 kubelet[2557]: E0114 01:08:09.636913 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:09.637336 kubelet[2557]: E0114 01:08:09.637069 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:09.656563 kubelet[2557]: E0114 01:08:09.656184 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:09.656563 kubelet[2557]: E0114 01:08:09.656352 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:09.661030 kubelet[2557]: E0114 01:08:09.661012 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:09.661289 kubelet[2557]: E0114 01:08:09.661270 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:09.726022 kubelet[2557]: I0114 01:08:09.725219 2557 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 14 01:08:10.667925 kubelet[2557]: E0114 01:08:10.667300 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:10.673042 kubelet[2557]: E0114 01:08:10.670070 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:10.673042 kubelet[2557]: E0114 01:08:10.671050 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:10.673042 kubelet[2557]: E0114 01:08:10.671972 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:10.678995 kubelet[2557]: E0114 01:08:10.677128 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:10.678995 kubelet[2557]: E0114 01:08:10.677997 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:11.679036 kubelet[2557]: E0114 01:08:11.678298 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:11.679036 kubelet[2557]: E0114 01:08:11.678899 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:12.684130 kubelet[2557]: E0114 01:08:12.683024 2557 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 14 01:08:12.684130 kubelet[2557]: E0114 01:08:12.683325 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:13.114951 kubelet[2557]: E0114 01:08:13.114883 2557 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 14 01:08:13.175864 kubelet[2557]: I0114 01:08:13.175048 2557 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 14 01:08:13.175864 kubelet[2557]: E0114 01:08:13.175085 2557 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 14 01:08:13.208863 kubelet[2557]: I0114 01:08:13.208512 2557 apiserver.go:52] "Watching apiserver"
Jan 14 01:08:13.259035 kubelet[2557]: I0114 01:08:13.258988 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:13.263074 kubelet[2557]: I0114 01:08:13.263024 2557 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 14 01:08:13.353384 kubelet[2557]: I0114 01:08:13.353005 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:13.506873 kubelet[2557]: E0114 01:08:13.506328 2557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:13.506873 kubelet[2557]: I0114 01:08:13.506369 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:13.508117 kubelet[2557]: E0114 01:08:13.507212 2557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:13.508876 kubelet[2557]: E0114 01:08:13.508362 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:13.527928 kubelet[2557]: E0114 01:08:13.527209 2557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:13.527928 kubelet[2557]: I0114 01:08:13.527259 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:13.549364 kubelet[2557]: E0114 01:08:13.549089 2557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:18.553946 kubelet[2557]: I0114 01:08:18.548978 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:18.885627 kubelet[2557]: E0114 01:08:18.877877 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:20.112163 kubelet[2557]: E0114 01:08:20.111512 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:21.290074 kubelet[2557]: I0114 01:08:21.289120 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:22.811957 kubelet[2557]: E0114 01:08:22.810054 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:23.685605 kubelet[2557]: I0114 01:08:23.680047 2557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:23.683183 systemd[1]: Reload requested from client PID 2858 ('systemctl') (unit session-8.scope)...
Jan 14 01:08:23.683202 systemd[1]: Reloading...
Jan 14 01:08:23.739592 kubelet[2557]: E0114 01:08:23.739540 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:23.806940 kubelet[2557]: E0114 01:08:23.805562 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:24.306061 kubelet[2557]: I0114 01:08:24.294399 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.294340146 podStartE2EDuration="6.294340146s" podCreationTimestamp="2026-01-14 01:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:24.058146403 +0000 UTC m=+19.222008474" watchObservedRunningTime="2026-01-14 01:08:24.294340146 +0000 UTC m=+19.458202217"
Jan 14 01:08:24.463912 zram_generator::config[2901]: No configuration found.
Jan 14 01:08:24.763846 kubelet[2557]: E0114 01:08:24.763018 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:24.810012 kubelet[2557]: I0114 01:08:24.802595 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.802357124 podStartE2EDuration="1.802357124s" podCreationTimestamp="2026-01-14 01:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:24.453090612 +0000 UTC m=+19.616952683" watchObservedRunningTime="2026-01-14 01:08:24.802357124 +0000 UTC m=+19.966219195"
Jan 14 01:08:25.802161 kubelet[2557]: E0114 01:08:25.799257 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:26.289903 systemd[1]: Reloading finished in 2605 ms.
Jan 14 01:08:26.407895 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:08:26.454017 systemd[1]: kubelet.service: Deactivated successfully.
Jan 14 01:08:26.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:26.456630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:08:26.456982 systemd[1]: kubelet.service: Consumed 6.784s CPU time, 130.7M memory peak.
Jan 14 01:08:26.471235 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 14 01:08:26.474081 kernel: audit: type=1131 audit(1768352906.455:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:26.469027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:08:26.477000 audit: BPF prog-id=116 op=LOAD
Jan 14 01:08:26.513154 kernel: audit: type=1334 audit(1768352906.477:413): prog-id=116 op=LOAD
Jan 14 01:08:26.477000 audit: BPF prog-id=85 op=UNLOAD
Jan 14 01:08:26.479000 audit: BPF prog-id=117 op=LOAD
Jan 14 01:08:26.479000 audit: BPF prog-id=79 op=UNLOAD
Jan 14 01:08:26.565015 kernel: audit: type=1334 audit(1768352906.477:414): prog-id=85 op=UNLOAD
Jan 14 01:08:26.565355 kernel: audit: type=1334 audit(1768352906.479:415): prog-id=117 op=LOAD
Jan 14 01:08:26.565413 kernel: audit: type=1334 audit(1768352906.479:416): prog-id=79 op=UNLOAD
Jan 14 01:08:26.565592 kernel: audit: type=1334 audit(1768352906.479:417): prog-id=118 op=LOAD
Jan 14 01:08:26.479000 audit: BPF prog-id=118 op=LOAD
Jan 14 01:08:26.480000 audit: BPF prog-id=119 op=LOAD
Jan 14 01:08:26.606126 kernel: audit: type=1334 audit(1768352906.480:418): prog-id=119 op=LOAD
Jan 14 01:08:26.607164 kernel: audit: type=1334 audit(1768352906.480:419): prog-id=80 op=UNLOAD
Jan 14 01:08:26.480000 audit: BPF prog-id=80 op=UNLOAD
Jan 14 01:08:26.620260 kernel: audit: type=1334 audit(1768352906.480:420): prog-id=81 op=UNLOAD
Jan 14 01:08:26.480000 audit: BPF prog-id=81 op=UNLOAD
Jan 14 01:08:26.481000 audit: BPF prog-id=120 op=LOAD
Jan 14 01:08:26.651229 kernel: audit: type=1334 audit(1768352906.481:421): prog-id=120 op=LOAD
Jan 14 01:08:26.481000 audit: BPF prog-id=66 op=UNLOAD
Jan 14 01:08:26.485000 audit: BPF prog-id=121 op=LOAD
Jan 14 01:08:26.485000 audit: BPF prog-id=67 op=UNLOAD
Jan 14 01:08:26.486000 audit: BPF prog-id=122 op=LOAD
Jan 14 01:08:26.486000 audit: BPF prog-id=123 op=LOAD
Jan 14 01:08:26.486000 audit: BPF prog-id=68 op=UNLOAD
Jan 14 01:08:26.486000 audit: BPF prog-id=69 op=UNLOAD
Jan 14 01:08:26.490000 audit: BPF prog-id=124 op=LOAD
Jan 14 01:08:26.491000 audit: BPF prog-id=76 op=UNLOAD
Jan 14 01:08:26.492000 audit: BPF prog-id=125 op=LOAD
Jan 14 01:08:26.492000 audit: BPF prog-id=126 op=LOAD
Jan 14 01:08:26.492000 audit: BPF prog-id=77 op=UNLOAD
Jan 14 01:08:26.492000 audit: BPF prog-id=78 op=UNLOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=127 op=LOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=70 op=UNLOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=128 op=LOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=129 op=LOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=71 op=UNLOAD
Jan 14 01:08:26.494000 audit: BPF prog-id=72 op=UNLOAD
Jan 14 01:08:26.503000 audit: BPF prog-id=130 op=LOAD
Jan 14 01:08:26.503000 audit: BPF prog-id=73 op=UNLOAD
Jan 14 01:08:26.510000 audit: BPF prog-id=131 op=LOAD
Jan 14 01:08:26.510000 audit: BPF prog-id=132 op=LOAD
Jan 14 01:08:26.510000 audit: BPF prog-id=74 op=UNLOAD
Jan 14 01:08:26.510000 audit: BPF prog-id=75 op=UNLOAD
Jan 14 01:08:26.514000 audit: BPF prog-id=133 op=LOAD
Jan 14 01:08:26.514000 audit: BPF prog-id=84 op=UNLOAD
Jan 14 01:08:26.540000 audit: BPF prog-id=134 op=LOAD
Jan 14 01:08:26.541000 audit: BPF prog-id=135 op=LOAD
Jan 14 01:08:26.541000 audit: BPF prog-id=82 op=UNLOAD
Jan 14 01:08:26.541000 audit: BPF prog-id=83 op=UNLOAD
Jan 14 01:08:27.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:08:27.238291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:08:27.269287 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 01:08:27.532318 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 01:08:27.532318 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 14 01:08:27.532318 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 01:08:27.532318 kubelet[2949]: I0114 01:08:27.531312 2949 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 01:08:27.591301 kubelet[2949]: I0114 01:08:27.590922 2949 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 14 01:08:27.591301 kubelet[2949]: I0114 01:08:27.591067 2949 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 01:08:27.595718 kubelet[2949]: I0114 01:08:27.593098 2949 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 14 01:08:27.601539 kubelet[2949]: I0114 01:08:27.597907 2949 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 14 01:08:27.617927 kubelet[2949]: I0114 01:08:27.617110 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 01:08:27.652911 kubelet[2949]: I0114 01:08:27.652290 2949 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 14 01:08:27.738862 kubelet[2949]: I0114 01:08:27.738415 2949 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 01:08:27.741603 kubelet[2949]: I0114 01:08:27.740613 2949 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 01:08:27.741603 kubelet[2949]: I0114 01:08:27.740997 2949 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 14 01:08:27.741603 kubelet[2949]: I0114 01:08:27.741220 2949 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 01:08:27.741603 kubelet[2949]: I0114 01:08:27.741234 2949 container_manager_linux.go:303] "Creating device plugin manager"
Jan 14 01:08:27.742575 kubelet[2949]: I0114 01:08:27.742306 2949 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 01:08:27.746552 kubelet[2949]: I0114 01:08:27.745159 2949 kubelet.go:480] "Attempting to sync node with API server"
Jan 14 01:08:27.746552 kubelet[2949]: I0114 01:08:27.745186 2949 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 01:08:27.746552 kubelet[2949]: I0114 01:08:27.745220 2949 kubelet.go:386] "Adding apiserver pod source"
Jan 14 01:08:27.746552 kubelet[2949]: I0114 01:08:27.745347 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 01:08:27.751208 kubelet[2949]: I0114 01:08:27.751151 2949 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 14 01:08:27.752310 kubelet[2949]: I0114 01:08:27.752280 2949 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 14 01:08:27.828361 kubelet[2949]: I0114 01:08:27.827408 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 14 01:08:27.828361 kubelet[2949]: I0114 01:08:27.828131 2949 server.go:1289] "Started kubelet"
Jan 14 01:08:27.841253 kubelet[2949]: I0114 01:08:27.832231 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 01:08:27.841394 kubelet[2949]: I0114 01:08:27.841359 2949 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 01:08:27.849308 kubelet[2949]: I0114 01:08:27.847041 2949 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 01:08:27.858042 kubelet[2949]: I0114 01:08:27.858009 2949 server.go:317] "Adding debug handlers to kubelet server"
Jan 14 01:08:27.879254 kubelet[2949]: I0114 01:08:27.879214 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 01:08:27.886095 kubelet[2949]: I0114 01:08:27.885267 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 14 01:08:27.891029 kubelet[2949]: I0114 01:08:27.891008 2949 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 14 01:08:27.892903 kubelet[2949]: I0114 01:08:27.892885 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 14 01:08:27.893382 kubelet[2949]: I0114 01:08:27.893360 2949 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 01:08:27.894148 kubelet[2949]: E0114 01:08:27.894119 2949 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 01:08:27.898998 kubelet[2949]: I0114 01:08:27.898974 2949 factory.go:223] Registration of the systemd container factory successfully
Jan 14 01:08:27.899227 kubelet[2949]: I0114 01:08:27.899194 2949 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 01:08:27.904143 kubelet[2949]: I0114 01:08:27.904115 2949 factory.go:223] Registration of the containerd container factory successfully
Jan 14 01:08:28.182391 kubelet[2949]: I0114 01:08:28.173908 2949 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 14 01:08:28.210878 kubelet[2949]: I0114 01:08:28.209583 2949 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 14 01:08:28.217128 kubelet[2949]: I0114 01:08:28.216880 2949 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 14 01:08:28.217128 kubelet[2949]: I0114 01:08:28.217012 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 14 01:08:28.217128 kubelet[2949]: I0114 01:08:28.217025 2949 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 14 01:08:28.217128 kubelet[2949]: E0114 01:08:28.217089 2949 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307149 2949 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307178 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307210 2949 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307570 2949 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307588 2949 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307611 2949 policy_none.go:49] "None policy: Start"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307623 2949 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.307870 2949 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 01:08:28.310073 kubelet[2949]: I0114 01:08:28.308231 2949 state_mem.go:75] "Updated machine memory state"
Jan 14 01:08:28.320949 kubelet[2949]: E0114 01:08:28.318929 2949 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 14 01:08:28.341016 kubelet[2949]: E0114 01:08:28.340367 2949 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 14 01:08:28.355565 kubelet[2949]: I0114 01:08:28.354920 2949 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 14 01:08:28.355565 kubelet[2949]: I0114 01:08:28.355078 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 01:08:28.355929 kubelet[2949]: I0114 01:08:28.355599 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 01:08:28.364412 kubelet[2949]: E0114 01:08:28.364257 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 14 01:08:28.515013 kubelet[2949]: I0114 01:08:28.510307 2949 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 14 01:08:28.537869 kubelet[2949]: I0114 01:08:28.534240 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:28.544924 kubelet[2949]: I0114 01:08:28.536088 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.549211 kubelet[2949]: I0114 01:08:28.549184 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:28.586282 kubelet[2949]: E0114 01:08:28.586244 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:28.587552 kubelet[2949]: E0114 01:08:28.587154 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:28.587895 kubelet[2949]: E0114 01:08:28.587875 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.599962 kubelet[2949]: I0114 01:08:28.599942 2949 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 14 01:08:28.600130 kubelet[2949]: I0114 01:08:28.600113 2949 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 14 01:08:28.603997 kubelet[2949]: I0114 01:08:28.603976 2949 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 14 01:08:28.612351 containerd[1661]: time="2026-01-14T01:08:28.612238206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 01:08:28.617937 kubelet[2949]: I0114 01:08:28.614601 2949 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 14 01:08:28.624598 kubelet[2949]: I0114 01:08:28.624557 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:28.624942 kubelet[2949]: I0114 01:08:28.624919 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.625281 kubelet[2949]: I0114 01:08:28.625044 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.625281 kubelet[2949]: I0114 01:08:28.625077 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.625281 kubelet[2949]: I0114 01:08:28.625102 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Jan 14 01:08:28.625281 kubelet[2949]: I0114 01:08:28.625149 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:28.625281 kubelet[2949]: I0114 01:08:28.625171 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.625577 kubelet[2949]: I0114 01:08:28.625199 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 14 01:08:28.625577 kubelet[2949]: I0114 01:08:28.625222 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1dc4ae38394d7c6112a3d8945eb0f83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1dc4ae38394d7c6112a3d8945eb0f83\") " pod="kube-system/kube-apiserver-localhost"
Jan 14 01:08:28.748933 kubelet[2949]: I0114 01:08:28.748899 2949 apiserver.go:52] "Watching apiserver"
Jan 14 01:08:28.797974 kubelet[2949]: I0114 01:08:28.795046 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 14 01:08:28.894902 kubelet[2949]: E0114 01:08:28.894587 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:28.902604 kubelet[2949]: E0114 01:08:28.897225 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:28.902604 kubelet[2949]: E0114 01:08:28.897338 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:29.370055 kubelet[2949]: E0114 01:08:29.368141 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:29.370055 kubelet[2949]: E0114 01:08:29.369557 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:29.370914 kubelet[2949]: E0114 01:08:29.370891 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:29.448130 kubelet[2949]: I0114 01:08:29.448086 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9b43a8a-4b47-4271-a525-256be20f0afa-xtables-lock\") pod \"kube-proxy-75jxh\" (UID: \"c9b43a8a-4b47-4271-a525-256be20f0afa\") " pod="kube-system/kube-proxy-75jxh"
Jan 14 01:08:29.448318 kubelet[2949]: I0114 01:08:29.448295 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9b43a8a-4b47-4271-a525-256be20f0afa-lib-modules\") pod \"kube-proxy-75jxh\" (UID: \"c9b43a8a-4b47-4271-a525-256be20f0afa\") " pod="kube-system/kube-proxy-75jxh"
Jan 14 01:08:29.448609 kubelet[2949]: I0114 01:08:29.448580 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9b43a8a-4b47-4271-a525-256be20f0afa-kube-proxy\") pod \"kube-proxy-75jxh\" (UID: \"c9b43a8a-4b47-4271-a525-256be20f0afa\") " pod="kube-system/kube-proxy-75jxh"
Jan 14 01:08:29.448954 kubelet[2949]: I0114 01:08:29.448926 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4bk\" (UniqueName: \"kubernetes.io/projected/c9b43a8a-4b47-4271-a525-256be20f0afa-kube-api-access-5f4bk\") pod \"kube-proxy-75jxh\" (UID: \"c9b43a8a-4b47-4271-a525-256be20f0afa\") " pod="kube-system/kube-proxy-75jxh"
Jan 14 01:08:29.505947 systemd[1]: Created slice kubepods-besteffort-podc9b43a8a_4b47_4271_a525_256be20f0afa.slice - libcontainer container kubepods-besteffort-podc9b43a8a_4b47_4271_a525_256be20f0afa.slice.
Jan 14 01:08:29.828136 kubelet[2949]: E0114 01:08:29.828010 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:29.835299 containerd[1661]: time="2026-01-14T01:08:29.830912256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75jxh,Uid:c9b43a8a-4b47-4271-a525-256be20f0afa,Namespace:kube-system,Attempt:0,}"
Jan 14 01:08:30.022868 containerd[1661]: time="2026-01-14T01:08:30.021378140Z" level=info msg="connecting to shim 9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787" address="unix:///run/containerd/s/59e1f7acfe114c9753c82588926524e12580964b61c8ac77f700108754dc8e6d" namespace=k8s.io protocol=ttrpc version=3
Jan 14 01:08:30.277950 kubelet[2949]: I0114 01:08:30.275916 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93f5d8a1-2b39-465a-b511-920a1f00fa64-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5dbqg\" (UID: \"93f5d8a1-2b39-465a-b511-920a1f00fa64\") " pod="tigera-operator/tigera-operator-7dcd859c48-5dbqg"
Jan 14 01:08:30.277950 kubelet[2949]: I0114 01:08:30.275968 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zrj\" (UniqueName: \"kubernetes.io/projected/93f5d8a1-2b39-465a-b511-920a1f00fa64-kube-api-access-z4zrj\") pod \"tigera-operator-7dcd859c48-5dbqg\" (UID: \"93f5d8a1-2b39-465a-b511-920a1f00fa64\") " pod="tigera-operator/tigera-operator-7dcd859c48-5dbqg"
Jan 14 01:08:30.317179 systemd[1]: Created slice kubepods-besteffort-pod93f5d8a1_2b39_465a_b511_920a1f00fa64.slice - libcontainer container kubepods-besteffort-pod93f5d8a1_2b39_465a_b511_920a1f00fa64.slice.
Jan 14 01:08:30.369618 systemd[1]: Started cri-containerd-9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787.scope - libcontainer container 9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787.
Jan 14 01:08:30.371196 kubelet[2949]: E0114 01:08:30.371030 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:30.380227 kubelet[2949]: E0114 01:08:30.379897 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:30.498000 audit: BPF prog-id=136 op=LOAD
Jan 14 01:08:30.502000 audit: BPF prog-id=137 op=LOAD
Jan 14 01:08:30.502000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.502000 audit: BPF prog-id=137 op=UNLOAD
Jan 14 01:08:30.502000 audit[3026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.503000 audit: BPF prog-id=138 op=LOAD
Jan 14 01:08:30.503000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.503000 audit: BPF prog-id=139 op=LOAD
Jan 14 01:08:30.503000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.503000 audit: BPF prog-id=139 op=UNLOAD
Jan 14 01:08:30.503000 audit[3026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.503000 audit: BPF prog-id=138 op=UNLOAD
Jan 14 01:08:30.503000 audit[3026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.503000 audit: BPF prog-id=140 op=LOAD
Jan 14 01:08:30.503000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3009 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303266646262333366666362333738313639346538656232353266
Jan 14 01:08:30.654091 containerd[1661]: time="2026-01-14T01:08:30.650828124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5dbqg,Uid:93f5d8a1-2b39-465a-b511-920a1f00fa64,Namespace:tigera-operator,Attempt:0,}"
Jan 14 01:08:30.666723 containerd[1661]: time="2026-01-14T01:08:30.666259576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75jxh,Uid:c9b43a8a-4b47-4271-a525-256be20f0afa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787\""
Jan 14 01:08:30.675409 kubelet[2949]: E0114 01:08:30.675229 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:08:30.788415 containerd[1661]: time="2026-01-14T01:08:30.752403355Z" level=info msg="CreateContainer within sandbox \"9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 01:08:30.974206 containerd[1661]: time="2026-01-14T01:08:30.973351904Z" level=info msg="connecting to shim e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b" address="unix:///run/containerd/s/f1738f9d61283b30d96fa288f2494af417555283b62ce823f548d0b100f6e7c6" namespace=k8s.io protocol=ttrpc version=3
Jan 14 01:08:30.986625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882672010.mount: Deactivated successfully.
Jan 14 01:08:31.002862 containerd[1661]: time="2026-01-14T01:08:31.001992065Z" level=info msg="Container faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:08:31.010358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739032357.mount: Deactivated successfully.
Jan 14 01:08:31.074878 containerd[1661]: time="2026-01-14T01:08:31.073562831Z" level=info msg="CreateContainer within sandbox \"9302fdbb33ffcb3781694e8eb252fd9f8ca683e828ff9ee0a9ada0b910624787\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93\""
Jan 14 01:08:31.088068 containerd[1661]: time="2026-01-14T01:08:31.087176018Z" level=info msg="StartContainer for \"faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93\""
Jan 14 01:08:31.109899 containerd[1661]: time="2026-01-14T01:08:31.109059377Z" level=info msg="connecting to shim faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93" address="unix:///run/containerd/s/59e1f7acfe114c9753c82588926524e12580964b61c8ac77f700108754dc8e6d" protocol=ttrpc version=3
Jan 14 01:08:31.248048 systemd[1]: Started cri-containerd-e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b.scope - libcontainer container e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b.
Jan 14 01:08:31.266979 systemd[1]: Started cri-containerd-faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93.scope - libcontainer container faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93.
Jan 14 01:08:31.496247 kernel: kauditd_printk_skb: 54 callbacks suppressed Jan 14 01:08:31.496389 kernel: audit: type=1334 audit(1768352911.475:462): prog-id=141 op=LOAD Jan 14 01:08:31.475000 audit: BPF prog-id=141 op=LOAD Jan 14 01:08:31.507000 audit: BPF prog-id=142 op=LOAD Jan 14 01:08:31.524600 kernel: audit: type=1334 audit(1768352911.507:463): prog-id=142 op=LOAD Jan 14 01:08:31.507000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.652423 kernel: audit: type=1300 audit(1768352911.507:463): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.653601 kernel: audit: type=1327 audit(1768352911.507:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.654350 kernel: audit: type=1334 audit(1768352911.507:464): prog-id=142 op=UNLOAD Jan 14 01:08:31.507000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:08:31.666362 kernel: audit: type=1300 audit(1768352911.507:464): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.507000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.821139 kernel: audit: type=1327 audit(1768352911.507:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.822115 kernel: audit: type=1334 audit(1768352911.507:465): prog-id=143 op=LOAD Jan 14 01:08:31.507000 audit: BPF prog-id=143 op=LOAD Jan 14 01:08:31.839784 kernel: audit: type=1300 audit(1768352911.507:465): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.507000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:31.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.942175 kernel: audit: type=1327 audit(1768352911.507:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.507000 audit: BPF prog-id=144 op=LOAD Jan 14 01:08:31.507000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.508000 audit: BPF prog-id=144 op=UNLOAD Jan 14 01:08:31.508000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 
Jan 14 01:08:31.508000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:08:31.508000 audit[3075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.508000 audit: BPF prog-id=145 op=LOAD Jan 14 01:08:31.508000 audit[3075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3064 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633633656437633062666662336337336461376164306263346563 Jan 14 01:08:31.820000 audit: BPF prog-id=146 op=LOAD Jan 14 01:08:31.820000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3009 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.820000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613233663038346361313831613762613934666431386437663663 Jan 14 01:08:31.820000 audit: BPF prog-id=147 op=LOAD Jan 14 01:08:31.820000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3009 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613233663038346361313831613762613934666431386437663663 Jan 14 01:08:31.820000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:08:31.820000 audit[3083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613233663038346361313831613762613934666431386437663663 Jan 14 01:08:31.820000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:08:31.820000 audit[3083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:31.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613233663038346361313831613762613934666431386437663663 Jan 14 01:08:31.820000 audit: BPF prog-id=148 op=LOAD Jan 14 01:08:31.820000 audit[3083]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=3009 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613233663038346361313831613762613934666431386437663663 Jan 14 01:08:31.987218 containerd[1661]: time="2026-01-14T01:08:31.985921273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5dbqg,Uid:93f5d8a1-2b39-465a-b511-920a1f00fa64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b\"" Jan 14 01:08:32.003176 containerd[1661]: time="2026-01-14T01:08:32.003131360Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:08:32.135002 containerd[1661]: time="2026-01-14T01:08:32.128073161Z" level=info msg="StartContainer for \"faa23f084ca181a7ba94fd18d7f6c378c22cec5c7b6ab35c1bdf50bbd40f1c93\" returns successfully" Jan 14 01:08:32.427128 kubelet[2949]: E0114 01:08:32.424577 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:32.472881 kubelet[2949]: E0114 01:08:32.472056 2949 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:32.485422 kubelet[2949]: E0114 01:08:32.484062 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:33.686379 kubelet[2949]: E0114 01:08:33.673294 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:33.977902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343890439.mount: Deactivated successfully. Jan 14 01:08:34.432000 audit[3169]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:34.432000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc59ac8ba0 a2=0 a3=7ffc59ac8b8c items=0 ppid=3100 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:08:34.464000 audit[3173]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:34.464000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7b50d110 a2=0 a3=7ffe7b50d0fc items=0 ppid=3100 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.464000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:08:34.481000 audit[3175]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:34.481000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2c495060 a2=0 a3=7ffc2c49504c items=0 ppid=3100 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:08:34.503000 audit[3170]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.503000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf7220360 a2=0 a3=7ffdf722034c items=0 ppid=3100 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:08:34.513619 kubelet[2949]: E0114 01:08:34.513098 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:34.565000 audit[3176]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.565000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6992f4a0 a2=0 
a3=7ffe6992f48c items=0 ppid=3100 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:08:34.580000 audit[3177]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.580000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea3e308a0 a2=0 a3=7ffea3e3088c items=0 ppid=3100 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:08:34.605100 kubelet[2949]: I0114 01:08:34.605037 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75jxh" podStartSLOduration=5.605017548 podStartE2EDuration="5.605017548s" podCreationTimestamp="2026-01-14 01:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:32.657301756 +0000 UTC m=+5.315749853" watchObservedRunningTime="2026-01-14 01:08:34.605017548 +0000 UTC m=+7.263465635" Jan 14 01:08:34.618389 kubelet[2949]: E0114 01:08:34.617265 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:34.617000 audit[3178]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3178 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.617000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc0de9d840 a2=0 a3=7ffc0de9d82c items=0 ppid=3100 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:08:34.683961 kubelet[2949]: E0114 01:08:34.683607 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:34.704000 audit[3180]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.704000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe49a874d0 a2=0 a3=7ffe49a874bc items=0 ppid=3100 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:08:34.844000 audit[3183]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.844000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff895985e0 a2=0 a3=7fff895985cc items=0 ppid=3100 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:08:34.864000 audit[3184]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.864000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd36d97fb0 a2=0 a3=7ffd36d97f9c items=0 ppid=3100 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:08:34.974000 audit[3186]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.974000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd2dc8310 a2=0 a3=7ffcd2dc82fc items=0 ppid=3100 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:08:34.992000 audit[3187]: NETFILTER_CFG 
table=filter:65 family=2 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:34.992000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd62816880 a2=0 a3=7ffd6281686c items=0 ppid=3100 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:34.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:08:35.053000 audit[3189]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.053000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff6e6eb500 a2=0 a3=7fff6e6eb4ec items=0 ppid=3100 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:08:35.107000 audit[3192]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.107000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe5af0ffd0 a2=0 a3=7ffe5af0ffbc items=0 ppid=3100 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.107000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:08:35.143000 audit[3193]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.143000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe42276b60 a2=0 a3=7ffe42276b4c items=0 ppid=3100 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:08:35.175000 audit[3195]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.175000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9a2b8ae0 a2=0 a3=7ffe9a2b8acc items=0 ppid=3100 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:08:35.205000 audit[3196]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.205000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fff77a7a770 a2=0 a3=7fff77a7a75c items=0 ppid=3100 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:08:35.245000 audit[3198]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.245000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe90043e10 a2=0 a3=7ffe90043dfc items=0 ppid=3100 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:35.290000 audit[3201]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.290000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda712c860 a2=0 a3=7ffda712c84c items=0 ppid=3100 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.290000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:35.342000 audit[3204]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.342000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff853adcd0 a2=0 a3=7fff853adcbc items=0 ppid=3100 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:08:35.352000 audit[3205]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.352000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2a411e00 a2=0 a3=7ffe2a411dec items=0 ppid=3100 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:08:35.376000 audit[3207]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.376000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7fff98d43640 a2=0 a3=7fff98d4362c items=0 ppid=3100 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:35.431000 audit[3210]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.431000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4b436af0 a2=0 a3=7ffd4b436adc items=0 ppid=3100 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:35.445000 audit[3211]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.445000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd15833770 a2=0 a3=7ffd1583375c items=0 ppid=3100 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 
01:08:35.552000 audit[3213]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:35.552000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff7c4eee60 a2=0 a3=7fff7c4eee4c items=0 ppid=3100 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:35.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:08:35.641177 kubelet[2949]: E0114 01:08:35.639604 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:36.053000 audit[3223]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.053000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5b8837d0 a2=0 a3=7fff5b8837bc items=0 ppid=3100 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.089000 audit[3223]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.089000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff5b8837d0 
a2=0 a3=7fff5b8837bc items=0 ppid=3100 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.123000 audit[3228]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.123000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffddde50e20 a2=0 a3=7ffddde50e0c items=0 ppid=3100 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:08:36.187000 audit[3230]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.187000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffed74a0720 a2=0 a3=7ffed74a070c items=0 ppid=3100 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:08:36.273000 audit[3233]: NETFILTER_CFG table=filter:83 
family=10 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.273000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff8b6d17c0 a2=0 a3=7fff8b6d17ac items=0 ppid=3100 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:08:36.299000 audit[3234]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.299000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9bdb1da0 a2=0 a3=7fff9bdb1d8c items=0 ppid=3100 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:08:36.340000 audit[3236]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.340000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec388bb70 a2=0 a3=7ffec388bb5c items=0 ppid=3100 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.340000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:08:36.363000 audit[3237]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.363000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0c0dac80 a2=0 a3=7ffc0c0dac6c items=0 ppid=3100 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:08:36.418000 audit[3239]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.418000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb9b3a370 a2=0 a3=7ffcb9b3a35c items=0 ppid=3100 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:08:36.490000 audit[3242]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.537143 kernel: kauditd_printk_skb: 129 callbacks 
suppressed Jan 14 01:08:36.537275 kernel: audit: type=1325 audit(1768352916.490:509): table=filter:88 family=10 entries=2 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.537345 kernel: audit: type=1300 audit(1768352916.490:509): arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd05e9fbc0 a2=0 a3=7ffd05e9fbac items=0 ppid=3100 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.490000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd05e9fbc0 a2=0 a3=7ffd05e9fbac items=0 ppid=3100 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:08:36.656432 kernel: audit: type=1327 audit(1768352916.490:509): proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:08:36.656930 kernel: audit: type=1325 audit(1768352916.562:510): table=filter:89 family=10 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.562000 audit[3243]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.690910 kernel: audit: 
type=1300 audit(1768352916.562:510): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc96ca6250 a2=0 a3=7ffc96ca623c items=0 ppid=3100 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.562000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc96ca6250 a2=0 a3=7ffc96ca623c items=0 ppid=3100 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:08:36.614000 audit[3245]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.815855 kernel: audit: type=1327 audit(1768352916.562:510): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:08:36.815946 kernel: audit: type=1325 audit(1768352916.614:511): table=filter:90 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.815988 kernel: audit: type=1300 audit(1768352916.614:511): arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb71414a0 a2=0 a3=7ffdb714148c items=0 ppid=3100 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.614000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb71414a0 a2=0 a3=7ffdb714148c items=0 ppid=3100 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.875160 kernel: audit: type=1327 audit(1768352916.614:511): proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:08:36.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:08:36.659000 audit[3246]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.659000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc20c83690 a2=0 a3=7ffc20c8367c items=0 ppid=3100 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:08:36.704000 audit[3248]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.704000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7430a160 a2=0 a3=7ffd7430a14c items=0 ppid=3100 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.704000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:36.766000 audit[3251]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.960601 kernel: audit: type=1325 audit(1768352916.659:512): table=filter:91 family=10 entries=1 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.766000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff426e9ad0 a2=0 a3=7fff426e9abc items=0 ppid=3100 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:08:36.867000 audit[3254]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.867000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3d0d5ff0 a2=0 a3=7fff3d0d5fdc items=0 ppid=3100 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.867000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:08:36.884000 audit[3255]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.884000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff56fe30e0 a2=0 a3=7fff56fe30cc items=0 ppid=3100 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:08:36.974000 audit[3257]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:36.974000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc2debb4a0 a2=0 a3=7ffc2debb48c items=0 ppid=3100 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:37.061000 audit[3260]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.061000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe5cc9d20 a2=0 
a3=7fffe5cc9d0c items=0 ppid=3100 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:37.089000 audit[3261]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.089000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffad661460 a2=0 a3=7fffad66144c items=0 ppid=3100 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:08:37.133000 audit[3263]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.133000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc14a833c0 a2=0 a3=7ffc14a833ac items=0 ppid=3100 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 
01:08:37.157000 audit[3264]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.157000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb860e180 a2=0 a3=7ffeb860e16c items=0 ppid=3100 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:08:37.230000 audit[3266]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.230000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff56d9e5c0 a2=0 a3=7fff56d9e5ac items=0 ppid=3100 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:37.308000 audit[3269]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:37.308000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4608a430 a2=0 a3=7ffe4608a41c items=0 ppid=3100 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.308000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:37.387000 audit[3271]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:08:37.387000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc0d7e1100 a2=0 a3=7ffc0d7e10ec items=0 ppid=3100 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.387000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:37.392000 audit[3271]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:08:37.392000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc0d7e1100 a2=0 a3=7ffc0d7e10ec items=0 ppid=3100 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.392000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:43.602619 containerd[1661]: time="2026-01-14T01:08:43.599081374Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.614846 containerd[1661]: time="2026-01-14T01:08:43.614009767Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:08:43.644095 containerd[1661]: time="2026-01-14T01:08:43.643394434Z" 
level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.662599 containerd[1661]: time="2026-01-14T01:08:43.653913493Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.662599 containerd[1661]: time="2026-01-14T01:08:43.659132137Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 11.65570755s" Jan 14 01:08:43.662599 containerd[1661]: time="2026-01-14T01:08:43.659163004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:08:43.696043 containerd[1661]: time="2026-01-14T01:08:43.694929944Z" level=info msg="CreateContainer within sandbox \"e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:08:43.798867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023055397.mount: Deactivated successfully. 
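The audit PROCTITLE fields in the records above are the process's argv, hex-encoded with NUL bytes separating the arguments. A minimal sketch for turning them back into readable command lines (plain Python, no external dependencies; the sample value is one of the shorter PROCTITLE strings from the records above):

```python
def decode_proctitle(hex_argv: str) -> str:
    """Decode an audit PROCTITLE value: a hex string of NUL-separated argv entries."""
    raw = bytes.fromhex(hex_argv)
    # Split on NUL bytes and drop the trailing empty piece, then rejoin with spaces
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# PROCTITLE value taken verbatim from one of the ip6tables audit records above
cmd = decode_proctitle(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D464F5257415244002D740066696C746572"
)
print(cmd)  # ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter
```

Note that the kernel truncates PROCTITLE at 128 bytes, which is why several of the longer values above end mid-argument; the decoder still recovers everything up to the cut.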
Jan 14 01:08:43.819303 containerd[1661]: time="2026-01-14T01:08:43.819269032Z" level=info msg="Container 09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:43.873965 containerd[1661]: time="2026-01-14T01:08:43.873403697Z" level=info msg="CreateContainer within sandbox \"e0c63ed7c0bffb3c73da7ad0bc4ec6ec2a28380270179132ef00a2330783eb8b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921\"" Jan 14 01:08:43.888270 containerd[1661]: time="2026-01-14T01:08:43.883273074Z" level=info msg="StartContainer for \"09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921\"" Jan 14 01:08:43.893879 containerd[1661]: time="2026-01-14T01:08:43.892941357Z" level=info msg="connecting to shim 09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921" address="unix:///run/containerd/s/f1738f9d61283b30d96fa288f2494af417555283b62ce823f548d0b100f6e7c6" protocol=ttrpc version=3 Jan 14 01:08:44.226333 systemd[1]: Started cri-containerd-09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921.scope - libcontainer container 09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921. 
Jan 14 01:08:44.312000 audit: BPF prog-id=149 op=LOAD Jan 14 01:08:44.331257 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 14 01:08:44.331356 kernel: audit: type=1334 audit(1768352924.312:526): prog-id=149 op=LOAD Jan 14 01:08:44.321000 audit: BPF prog-id=150 op=LOAD Jan 14 01:08:44.354839 kernel: audit: type=1334 audit(1768352924.321:527): prog-id=150 op=LOAD Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000198238 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.407859 kernel: audit: type=1300 audit(1768352924.321:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000198238 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.407999 kernel: audit: type=1327 audit(1768352924.321:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.321000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:08:44.473611 kernel: audit: type=1334 audit(1768352924.321:528): prog-id=150 op=UNLOAD Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3272 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.564231 kernel: audit: type=1300 audit(1768352924.321:528): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.656937 kernel: audit: type=1327 audit(1768352924.321:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.657054 kernel: audit: type=1334 audit(1768352924.321:529): prog-id=151 op=LOAD Jan 14 01:08:44.321000 audit: BPF prog-id=151 op=LOAD Jan 14 01:08:44.709305 kernel: audit: type=1300 audit(1768352924.321:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000198488 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000198488 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.817885 kernel: audit: type=1327 audit(1768352924.321:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.321000 audit: BPF prog-id=152 op=LOAD Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000198218 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 Jan 14 01:08:44.321000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665 
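The rate-limited audit lines above carry two timestamp formats: the journal's wall-clock prefix and the audit record's own `audit(epoch.millis:serial)` stamp. A quick sanity check that they agree, using the epoch value from the `audit(1768352924.312:526)` record above:

```python
from datetime import datetime, timezone

# Epoch seconds from the audit(1768352924.312:526) record; the .312 is milliseconds
ts = datetime.fromtimestamp(1768352924, tz=timezone.utc)
print(ts.isoformat())  # 2026-01-14T01:08:44+00:00, matching the journal prefix
```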
Jan 14 01:08:44.321000 audit: BPF prog-id=151 op=UNLOAD
Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665
Jan 14 01:08:44.321000 audit: BPF prog-id=153 op=LOAD
Jan 14 01:08:44.321000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001986e8 a2=98 a3=0 items=0 ppid=3064 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:08:44.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613530653464346439303232656261376130323538386366333665
Jan 14 01:08:45.008921 containerd[1661]: time="2026-01-14T01:08:45.008000858Z" level=info msg="StartContainer for \"09a50e4d4d9022eba7a02588cf36e3162bccc50e73d542dc4b1533b01ea19921\" returns successfully"
Jan 14 01:08:45.857962 kubelet[2949]: I0114 01:08:45.851914 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5dbqg" podStartSLOduration=4.182009269 podStartE2EDuration="15.851893624s" podCreationTimestamp="2026-01-14 01:08:30 +0000 UTC" firstStartedPulling="2026-01-14 01:08:32.000185732 +0000 UTC m=+4.658633819" lastFinishedPulling="2026-01-14 01:08:43.670070087 +0000 UTC m=+16.328518174" observedRunningTime="2026-01-14 01:08:45.843326925 +0000 UTC m=+18.501775022" watchObservedRunningTime="2026-01-14 01:08:45.851893624 +0000 UTC m=+18.510341711"
Jan 14 01:08:54.311468 update_engine[1637]: I20260114 01:08:54.311250 1637 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 14 01:08:54.311468 update_engine[1637]: I20260114 01:08:54.311460 1637 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 14 01:08:54.316871 update_engine[1637]: I20260114 01:08:54.314446 1637 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 14 01:08:54.327902 update_engine[1637]: I20260114 01:08:54.327186 1637 omaha_request_params.cc:62] Current group set to alpha
Jan 14 01:08:54.329609 update_engine[1637]: I20260114 01:08:54.329313 1637 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 14 01:08:54.330079 update_engine[1637]: I20260114 01:08:54.329474 1637 update_attempter.cc:643] Scheduling an action processor start.
Jan 14 01:08:54.330079 update_engine[1637]: I20260114 01:08:54.329911 1637 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 14 01:08:54.330079 update_engine[1637]: I20260114 01:08:54.329972 1637 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 14 01:08:54.333161 update_engine[1637]: I20260114 01:08:54.330082 1637 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 14 01:08:54.333161 update_engine[1637]: I20260114 01:08:54.330103 1637 omaha_request_action.cc:272] Request:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]:
Jan 14 01:08:54.333161 update_engine[1637]: I20260114 01:08:54.330118 1637 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 14 01:08:54.334355 locksmithd[1711]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 14 01:08:54.351025 update_engine[1637]: I20260114 01:08:54.350028 1637 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 14 01:08:54.357394 update_engine[1637]: I20260114 01:08:54.357059 1637 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 14 01:08:54.373321 update_engine[1637]: E20260114 01:08:54.373201 1637 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 14 01:08:54.373321 update_engine[1637]: I20260114 01:08:54.373311 1637 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 14 01:09:01.914000 audit[1870]: USER_END pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:01.915267 sudo[1870]: pam_unix(sudo:session): session closed for user root
Jan 14 01:09:01.927893 sshd[1869]: Connection closed by 10.0.0.1 port 37154
Jan 14 01:09:01.926187 sshd-session[1865]: pam_unix(sshd:session): session closed for user core
Jan 14 01:09:01.929465 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jan 14 01:09:01.930044 kernel: audit: type=1106 audit(1768352941.914:534): pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:01.940359 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:37154.service: Deactivated successfully.
Jan 14 01:09:01.949262 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 01:09:01.950319 systemd[1]: session-8.scope: Consumed 14.958s CPU time, 214.3M memory peak.
Jan 14 01:09:01.956461 systemd-logind[1635]: Session 8 logged out. Waiting for processes to exit.
Jan 14 01:09:01.967098 systemd-logind[1635]: Removed session 8.
Jan 14 01:09:01.915000 audit[1870]: CRED_DISP pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:02.048911 kernel: audit: type=1104 audit(1768352941.915:535): pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:01.930000 audit[1865]: USER_END pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:09:01.932000 audit[1865]: CRED_DISP pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:09:02.225193 kernel: audit: type=1106 audit(1768352941.930:536): pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:09:02.225315 kernel: audit: type=1104 audit(1768352941.932:537): pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:09:02.225383 kernel: audit: type=1131 audit(1768352941.940:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:37154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:01.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:37154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:09:04.304343 update_engine[1637]: I20260114 01:09:04.302831 1637 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 14 01:09:04.304343 update_engine[1637]: I20260114 01:09:04.303089 1637 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 14 01:09:04.310076 update_engine[1637]: I20260114 01:09:04.308414 1637 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 14 01:09:04.338173 update_engine[1637]: E20260114 01:09:04.336298 1637 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 14 01:09:04.338474 update_engine[1637]: I20260114 01:09:04.338249 1637 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 14 01:09:08.075028 kernel: audit: type=1325 audit(1768352948.016:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.016000 audit[3373]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.016000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe51942020 a2=0 a3=7ffe5194200c items=0 ppid=3100 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.147435 kernel: audit: type=1300 audit(1768352948.016:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe51942020 a2=0 a3=7ffe5194200c items=0 ppid=3100 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.190060 kernel: audit: type=1327 audit(1768352948.016:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.080000 audit[3373]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.236961 kernel: audit: type=1325 audit(1768352948.080:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.237082 kernel: audit: type=1300 audit(1768352948.080:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe51942020 a2=0 a3=0 items=0 ppid=3100 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.080000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe51942020 a2=0 a3=0 items=0 ppid=3100 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.346909 kernel: audit: type=1327 audit(1768352948.080:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.484000 audit[3375]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.532146 kernel: audit: type=1325 audit(1768352948.484:541): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.484000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcd670d660 a2=0 a3=7ffcd670d64c items=0 ppid=3100 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.596159 kernel: audit: type=1300 audit(1768352948.484:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcd670d660 a2=0 a3=7ffcd670d64c items=0 ppid=3100 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.549000 audit[3375]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.659156 kernel: audit: type=1327 audit(1768352948.484:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:08.659247 kernel: audit: type=1325 audit(1768352948.549:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:08.549000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd670d660 a2=0 a3=0 items=0 ppid=3100 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:08.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:14.309359 update_engine[1637]: I20260114 01:09:14.308266 1637 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 14 01:09:14.309359 update_engine[1637]: I20260114 01:09:14.308386 1637 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 14 01:09:14.310506 update_engine[1637]: I20260114 01:09:14.309430 1637 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 14 01:09:14.338511 update_engine[1637]: E20260114 01:09:14.338336 1637 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 14 01:09:14.338511 update_engine[1637]: I20260114 01:09:14.338469 1637 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 14 01:09:16.228976 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 14 01:09:16.229150 kernel: audit: type=1325 audit(1768352956.175:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.175000 audit[3378]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.175000 audit[3378]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde005c390 a2=0 a3=7ffde005c37c items=0 ppid=3100 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.311123 kernel: audit: type=1300 audit(1768352956.175:543): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde005c390 a2=0 a3=7ffde005c37c items=0 ppid=3100 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.311275 kernel: audit: type=1327 audit(1768352956.175:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.243000 audit[3378]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.387953 kernel: audit: type=1325 audit(1768352956.243:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.243000 audit[3378]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde005c390 a2=0 a3=0 items=0 ppid=3100 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.456008 kernel: audit: type=1300 audit(1768352956.243:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde005c390 a2=0 a3=0 items=0 ppid=3100 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.456136 kernel: audit: type=1327 audit(1768352956.243:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.692002 kernel: audit: type=1325 audit(1768352956.651:545): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.651000 audit[3380]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.651000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff68c76210 a2=0 a3=7fff68c761fc items=0 ppid=3100 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.774255 kernel: audit: type=1300 audit(1768352956.651:545): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff68c76210 a2=0 a3=7fff68c761fc items=0 ppid=3100 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.786000 audit[3380]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.860486 kernel: audit: type=1327 audit(1768352956.651:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:16.861102 kernel: audit: type=1325 audit(1768352956.786:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:16.786000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff68c76210 a2=0 a3=0 items=0 ppid=3100 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:16.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:18.006000 audit[3382]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:18.006000 audit[3382]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffec22aed40 a2=0 a3=7ffec22aed2c items=0 ppid=3100 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:18.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:18.016000 audit[3382]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:18.016000 audit[3382]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec22aed40 a2=0 a3=0 items=0 ppid=3100 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:18.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:21.426077 kernel: kauditd_printk_skb: 8 callbacks suppressed
Jan 14 01:09:21.426231 kernel: audit: type=1325 audit(1768352961.398:549): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:21.398000 audit[3384]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:21.398000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcad190cf0 a2=0 a3=7ffcad190cdc items=0 ppid=3100 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:21.598436 systemd[1]: Created slice kubepods-besteffort-pod879f4058_0721_4bd6_ad99_48cd22966609.slice - libcontainer container kubepods-besteffort-pod879f4058_0721_4bd6_ad99_48cd22966609.slice.
Jan 14 01:09:21.617086 kubelet[2949]: I0114 01:09:21.614161 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/879f4058-0721-4bd6-ad99-48cd22966609-typha-certs\") pod \"calico-typha-f46fb7d64-nhlgd\" (UID: \"879f4058-0721-4bd6-ad99-48cd22966609\") " pod="calico-system/calico-typha-f46fb7d64-nhlgd"
Jan 14 01:09:21.617086 kubelet[2949]: I0114 01:09:21.614206 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xtp\" (UniqueName: \"kubernetes.io/projected/879f4058-0721-4bd6-ad99-48cd22966609-kube-api-access-h5xtp\") pod \"calico-typha-f46fb7d64-nhlgd\" (UID: \"879f4058-0721-4bd6-ad99-48cd22966609\") " pod="calico-system/calico-typha-f46fb7d64-nhlgd"
Jan 14 01:09:21.617086 kubelet[2949]: I0114 01:09:21.614240 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879f4058-0721-4bd6-ad99-48cd22966609-tigera-ca-bundle\") pod \"calico-typha-f46fb7d64-nhlgd\" (UID: \"879f4058-0721-4bd6-ad99-48cd22966609\") " pod="calico-system/calico-typha-f46fb7d64-nhlgd"
Jan 14 01:09:21.758286 kernel: audit: type=1300 audit(1768352961.398:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcad190cf0 a2=0 a3=7ffcad190cdc items=0 ppid=3100 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:21.870200 kernel: audit: type=1327 audit(1768352961.398:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:21.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:21.923000 audit[3384]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:21.972939 kernel: audit: type=1325 audit(1768352961.923:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:21.923000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcad190cf0 a2=0 a3=0 items=0 ppid=3100 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:22.084545 kernel: audit: type=1300 audit(1768352961.923:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcad190cf0 a2=0 a3=0 items=0 ppid=3100 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:21.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:22.129209 systemd[1]: Created slice kubepods-besteffort-pod7020f34e_5e70_41e4_87b7_9b217db7b4c4.slice - libcontainer container kubepods-besteffort-pod7020f34e_5e70_41e4_87b7_9b217db7b4c4.slice.
Jan 14 01:09:22.122000 audit[3388]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:22.171398 kernel: audit: type=1327 audit(1768352961.923:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:22.171509 kernel: audit: type=1325 audit(1768352962.122:551): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:22.174491 kernel: audit: type=1300 audit(1768352962.122:551): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc448be880 a2=0 a3=7ffc448be86c items=0 ppid=3100 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:22.122000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc448be880 a2=0 a3=7ffc448be86c items=0 ppid=3100 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:22.202443 kubelet[2949]: I0114 01:09:22.202094 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-cni-log-dir\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.202443 kubelet[2949]: I0114 01:09:22.202444 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-var-lib-calico\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203049 kubelet[2949]: I0114 01:09:22.202479 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-var-run-calico\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203049 kubelet[2949]: I0114 01:09:22.202507 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-cni-net-dir\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203049 kubelet[2949]: I0114 01:09:22.202531 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7020f34e-5e70-41e4-87b7-9b217db7b4c4-tigera-ca-bundle\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203049 kubelet[2949]: I0114 01:09:22.202554 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-flexvol-driver-host\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203049 kubelet[2949]: I0114 01:09:22.203007 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7020f34e-5e70-41e4-87b7-9b217db7b4c4-node-certs\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203232 kubelet[2949]: I0114 01:09:22.203037 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-policysync\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203232 kubelet[2949]: I0114 01:09:22.203061 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bxk\" (UniqueName: \"kubernetes.io/projected/7020f34e-5e70-41e4-87b7-9b217db7b4c4-kube-api-access-24bxk\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203232 kubelet[2949]: I0114 01:09:22.203090 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-cni-bin-dir\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203232 kubelet[2949]: I0114 01:09:22.203112 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-lib-modules\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.203232 kubelet[2949]: I0114 01:09:22.203136 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7020f34e-5e70-41e4-87b7-9b217db7b4c4-xtables-lock\") pod \"calico-node-l6kp8\" (UID: \"7020f34e-5e70-41e4-87b7-9b217db7b4c4\") " pod="calico-system/calico-node-l6kp8"
Jan 14 01:09:22.238216 kernel: audit: type=1327 audit(1768352962.122:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:22.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:22.236000 audit[3388]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:22.271437 kubelet[2949]: E0114 01:09:22.271348 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:09:22.276226 containerd[1661]: time="2026-01-14T01:09:22.275227580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f46fb7d64-nhlgd,Uid:879f4058-0721-4bd6-ad99-48cd22966609,Namespace:calico-system,Attempt:0,}"
Jan 14 01:09:22.298321 kernel: audit: type=1325 audit(1768352962.236:552): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:09:22.298410 kubelet[2949]: E0114 01:09:22.292131 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e"
Jan 14 01:09:22.236000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc448be880 a2=0 a3=0 items=0 ppid=3100 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:09:22.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:09:22.421147 kubelet[2949]: E0114 01:09:22.419511 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:09:22.421147 kubelet[2949]: W0114 01:09:22.420122 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:09:22.421147 kubelet[2949]: E0114 01:09:22.420157 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 01:09:22.424041 kubelet[2949]: E0114 01:09:22.422538 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:09:22.424041 kubelet[2949]: W0114 01:09:22.422558 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:09:22.424041 kubelet[2949]: E0114 01:09:22.423163 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 01:09:22.424041 kubelet[2949]: E0114 01:09:22.423435 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:09:22.424041 kubelet[2949]: W0114 01:09:22.423446 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:09:22.424041 kubelet[2949]: E0114 01:09:22.423459 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 01:09:22.424483 kubelet[2949]: E0114 01:09:22.424297 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:09:22.424483 kubelet[2949]: W0114 01:09:22.424452 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:09:22.424483 kubelet[2949]: E0114 01:09:22.424467 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.425269 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.427353 kubelet[2949]: W0114 01:09:22.425280 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.425295 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.426009 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.427353 kubelet[2949]: W0114 01:09:22.426024 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.426039 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.426306 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.427353 kubelet[2949]: W0114 01:09:22.426317 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.427353 kubelet[2949]: E0114 01:09:22.426329 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.431232 kubelet[2949]: E0114 01:09:22.430463 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.431232 kubelet[2949]: W0114 01:09:22.431134 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.431232 kubelet[2949]: E0114 01:09:22.431170 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.432367 kubelet[2949]: E0114 01:09:22.432129 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.432367 kubelet[2949]: W0114 01:09:22.432311 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.432367 kubelet[2949]: E0114 01:09:22.432337 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.434281 kubelet[2949]: E0114 01:09:22.434075 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.434281 kubelet[2949]: W0114 01:09:22.434250 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.434281 kubelet[2949]: E0114 01:09:22.434273 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.435035 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437020 kubelet[2949]: W0114 01:09:22.435061 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.435077 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.435342 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437020 kubelet[2949]: W0114 01:09:22.435353 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.435363 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.436050 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437020 kubelet[2949]: W0114 01:09:22.436062 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.436074 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.437020 kubelet[2949]: E0114 01:09:22.436318 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437409 kubelet[2949]: W0114 01:09:22.436329 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437409 kubelet[2949]: E0114 01:09:22.436340 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.437409 kubelet[2949]: E0114 01:09:22.437072 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437409 kubelet[2949]: W0114 01:09:22.437086 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437409 kubelet[2949]: E0114 01:09:22.437097 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.437409 kubelet[2949]: E0114 01:09:22.437346 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.437409 kubelet[2949]: W0114 01:09:22.437360 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.437409 kubelet[2949]: E0114 01:09:22.437374 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.438396 kubelet[2949]: E0114 01:09:22.438216 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.438396 kubelet[2949]: W0114 01:09:22.438383 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.438396 kubelet[2949]: E0114 01:09:22.438395 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.440124 kubelet[2949]: E0114 01:09:22.439301 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.440124 kubelet[2949]: W0114 01:09:22.440071 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.440124 kubelet[2949]: E0114 01:09:22.440092 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.443056 kubelet[2949]: E0114 01:09:22.441393 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.443056 kubelet[2949]: W0114 01:09:22.441415 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.443056 kubelet[2949]: E0114 01:09:22.441428 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.443056 kubelet[2949]: I0114 01:09:22.441453 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1036b5d9-9d65-4e70-adc3-802295ee7a1e-registration-dir\") pod \"csi-node-driver-tbvx7\" (UID: \"1036b5d9-9d65-4e70-adc3-802295ee7a1e\") " pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:22.443226 kubelet[2949]: E0114 01:09:22.443133 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.443226 kubelet[2949]: W0114 01:09:22.443146 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.443226 kubelet[2949]: E0114 01:09:22.443155 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.443226 kubelet[2949]: I0114 01:09:22.443173 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1036b5d9-9d65-4e70-adc3-802295ee7a1e-socket-dir\") pod \"csi-node-driver-tbvx7\" (UID: \"1036b5d9-9d65-4e70-adc3-802295ee7a1e\") " pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:22.449032 kubelet[2949]: E0114 01:09:22.448265 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.449032 kubelet[2949]: W0114 01:09:22.448286 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.449032 kubelet[2949]: E0114 01:09:22.448302 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.449032 kubelet[2949]: I0114 01:09:22.448324 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1036b5d9-9d65-4e70-adc3-802295ee7a1e-kubelet-dir\") pod \"csi-node-driver-tbvx7\" (UID: \"1036b5d9-9d65-4e70-adc3-802295ee7a1e\") " pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:22.450380 kubelet[2949]: E0114 01:09:22.450073 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.450380 kubelet[2949]: W0114 01:09:22.450244 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.450380 kubelet[2949]: E0114 01:09:22.450260 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.454076 kubelet[2949]: I0114 01:09:22.451068 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1036b5d9-9d65-4e70-adc3-802295ee7a1e-varrun\") pod \"csi-node-driver-tbvx7\" (UID: \"1036b5d9-9d65-4e70-adc3-802295ee7a1e\") " pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:22.454076 kubelet[2949]: E0114 01:09:22.451440 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.454076 kubelet[2949]: W0114 01:09:22.451456 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.454076 kubelet[2949]: E0114 01:09:22.451469 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.503357 kubelet[2949]: E0114 01:09:22.502477 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.514275 kubelet[2949]: W0114 01:09:22.513257 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.514275 kubelet[2949]: E0114 01:09:22.513447 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.524089 kubelet[2949]: E0114 01:09:22.523173 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.524089 kubelet[2949]: W0114 01:09:22.523235 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.524089 kubelet[2949]: E0114 01:09:22.523257 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.525457 kubelet[2949]: E0114 01:09:22.524388 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.525457 kubelet[2949]: W0114 01:09:22.524402 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.525457 kubelet[2949]: E0114 01:09:22.524418 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.537025 kubelet[2949]: E0114 01:09:22.532167 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.537025 kubelet[2949]: W0114 01:09:22.532228 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.537025 kubelet[2949]: E0114 01:09:22.532245 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.539337 kubelet[2949]: E0114 01:09:22.538241 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.539337 kubelet[2949]: W0114 01:09:22.538267 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.539337 kubelet[2949]: E0114 01:09:22.538281 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.543047 containerd[1661]: time="2026-01-14T01:09:22.542433640Z" level=info msg="connecting to shim 65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975" address="unix:///run/containerd/s/c0ce2c13aa453824772892f3e19bc7217dd72ab466f96a0e41081337b35a5ff2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:22.552209 kubelet[2949]: E0114 01:09:22.551298 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.552209 kubelet[2949]: W0114 01:09:22.551320 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.552209 kubelet[2949]: E0114 01:09:22.551339 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.554234 kubelet[2949]: E0114 01:09:22.553044 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.554234 kubelet[2949]: W0114 01:09:22.553058 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.554234 kubelet[2949]: E0114 01:09:22.553072 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.562087 kubelet[2949]: E0114 01:09:22.561524 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.562087 kubelet[2949]: W0114 01:09:22.561547 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.562087 kubelet[2949]: E0114 01:09:22.561560 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.568998 kubelet[2949]: E0114 01:09:22.568100 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.568998 kubelet[2949]: W0114 01:09:22.568259 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.568998 kubelet[2949]: E0114 01:09:22.568272 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.583296 kubelet[2949]: E0114 01:09:22.582413 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.583296 kubelet[2949]: W0114 01:09:22.582434 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.583296 kubelet[2949]: E0114 01:09:22.582449 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.589121 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.594940 kubelet[2949]: W0114 01:09:22.589142 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.589157 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.593135 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.594940 kubelet[2949]: W0114 01:09:22.593149 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.593162 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.594077 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.594940 kubelet[2949]: W0114 01:09:22.594089 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.594940 kubelet[2949]: E0114 01:09:22.594101 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.597563 kubelet[2949]: E0114 01:09:22.597257 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.597563 kubelet[2949]: W0114 01:09:22.597432 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.597563 kubelet[2949]: E0114 01:09:22.597450 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.606490 kubelet[2949]: E0114 01:09:22.606455 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.607173 kubelet[2949]: W0114 01:09:22.607143 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.607275 kubelet[2949]: E0114 01:09:22.607255 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.616358 kubelet[2949]: E0114 01:09:22.611241 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.616358 kubelet[2949]: W0114 01:09:22.611275 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.616358 kubelet[2949]: E0114 01:09:22.611307 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.619426 kubelet[2949]: E0114 01:09:22.619126 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.619426 kubelet[2949]: W0114 01:09:22.619151 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.619426 kubelet[2949]: E0114 01:09:22.619191 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.622449 kubelet[2949]: E0114 01:09:22.620935 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.622449 kubelet[2949]: W0114 01:09:22.620947 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.622449 kubelet[2949]: E0114 01:09:22.620958 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.627279 kubelet[2949]: E0114 01:09:22.624271 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.627279 kubelet[2949]: W0114 01:09:22.624290 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.627279 kubelet[2949]: E0114 01:09:22.624300 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.627279 kubelet[2949]: E0114 01:09:22.626404 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.627279 kubelet[2949]: W0114 01:09:22.626424 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.627279 kubelet[2949]: E0114 01:09:22.626443 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.630566 kubelet[2949]: E0114 01:09:22.630269 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.632516 kubelet[2949]: W0114 01:09:22.632332 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.632516 kubelet[2949]: E0114 01:09:22.632362 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.637453 kubelet[2949]: E0114 01:09:22.637106 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.637453 kubelet[2949]: W0114 01:09:22.637266 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.637453 kubelet[2949]: E0114 01:09:22.637298 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.638305 kubelet[2949]: I0114 01:09:22.638013 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jch8s\" (UniqueName: \"kubernetes.io/projected/1036b5d9-9d65-4e70-adc3-802295ee7a1e-kube-api-access-jch8s\") pod \"csi-node-driver-tbvx7\" (UID: \"1036b5d9-9d65-4e70-adc3-802295ee7a1e\") " pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:22.642190 kubelet[2949]: E0114 01:09:22.642002 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.642190 kubelet[2949]: W0114 01:09:22.642167 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.642190 kubelet[2949]: E0114 01:09:22.642192 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.643522 kubelet[2949]: E0114 01:09:22.643327 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.643522 kubelet[2949]: W0114 01:09:22.643485 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.643522 kubelet[2949]: E0114 01:09:22.643502 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.651139 kubelet[2949]: E0114 01:09:22.650504 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.651139 kubelet[2949]: W0114 01:09:22.651111 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.651139 kubelet[2949]: E0114 01:09:22.651135 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.655442 kubelet[2949]: E0114 01:09:22.655144 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.655442 kubelet[2949]: W0114 01:09:22.655166 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.655442 kubelet[2949]: E0114 01:09:22.655183 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.658222 kubelet[2949]: E0114 01:09:22.658082 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.658222 kubelet[2949]: W0114 01:09:22.658106 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.658222 kubelet[2949]: E0114 01:09:22.658122 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.659436 kubelet[2949]: E0114 01:09:22.659410 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.659436 kubelet[2949]: W0114 01:09:22.659432 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.659552 kubelet[2949]: E0114 01:09:22.659447 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.661262 kubelet[2949]: E0114 01:09:22.660546 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.663057 kubelet[2949]: W0114 01:09:22.662064 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.663057 kubelet[2949]: E0114 01:09:22.662090 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.666248 kubelet[2949]: E0114 01:09:22.665550 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.666248 kubelet[2949]: W0114 01:09:22.665566 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.666248 kubelet[2949]: E0114 01:09:22.666009 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.672295 kubelet[2949]: E0114 01:09:22.671347 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.673231 kubelet[2949]: W0114 01:09:22.673112 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.673231 kubelet[2949]: E0114 01:09:22.673144 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.677952 kubelet[2949]: E0114 01:09:22.677402 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.677952 kubelet[2949]: W0114 01:09:22.677429 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.677952 kubelet[2949]: E0114 01:09:22.677448 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.678439 kubelet[2949]: E0114 01:09:22.678244 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.678439 kubelet[2949]: W0114 01:09:22.678408 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.678439 kubelet[2949]: E0114 01:09:22.678425 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.683291 kubelet[2949]: E0114 01:09:22.682392 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.683291 kubelet[2949]: W0114 01:09:22.682556 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.683291 kubelet[2949]: E0114 01:09:22.682997 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.693324 kubelet[2949]: E0114 01:09:22.693058 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.693324 kubelet[2949]: W0114 01:09:22.693081 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.693324 kubelet[2949]: E0114 01:09:22.693101 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.741383 kubelet[2949]: E0114 01:09:22.740299 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:22.747220 containerd[1661]: time="2026-01-14T01:09:22.747183644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6kp8,Uid:7020f34e-5e70-41e4-87b7-9b217db7b4c4,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:22.749407 kubelet[2949]: E0114 01:09:22.747564 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.749407 kubelet[2949]: W0114 01:09:22.748499 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.749407 kubelet[2949]: E0114 01:09:22.748529 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.750423 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.753224 kubelet[2949]: W0114 01:09:22.750444 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.750466 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.751253 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.753224 kubelet[2949]: W0114 01:09:22.751269 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.751285 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.751553 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.753224 kubelet[2949]: W0114 01:09:22.751567 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.752067 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.753224 kubelet[2949]: E0114 01:09:22.752361 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.753565 kubelet[2949]: W0114 01:09:22.752373 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.753565 kubelet[2949]: E0114 01:09:22.752385 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:22.800535 systemd[1]: Started cri-containerd-65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975.scope - libcontainer container 65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975. Jan 14 01:09:22.834107 kubelet[2949]: E0114 01:09:22.834045 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:22.834107 kubelet[2949]: W0114 01:09:22.834071 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:22.834107 kubelet[2949]: E0114 01:09:22.834093 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:22.903000 audit: BPF prog-id=154 op=LOAD Jan 14 01:09:22.910000 audit: BPF prog-id=155 op=LOAD Jan 14 01:09:22.910000 audit[3469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.911000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:09:22.911000 audit[3469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.912000 audit: BPF prog-id=156 op=LOAD Jan 14 01:09:22.912000 audit[3469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.912000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.912000 audit: BPF prog-id=157 op=LOAD Jan 14 01:09:22.912000 audit[3469]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.913000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:09:22.913000 audit[3469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.913000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:09:22.913000 audit[3469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:09:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:22.913000 audit: BPF prog-id=158 op=LOAD Jan 14 01:09:22.913000 audit[3469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3424 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:22.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635666335663734303238396466346365373661653965333061303562 Jan 14 01:09:23.004201 containerd[1661]: time="2026-01-14T01:09:23.003453710Z" level=info msg="connecting to shim 23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f" address="unix:///run/containerd/s/8468593821133bdf76f85428454cd58917bdcebdd39eea30032871f2f0bbdc6b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:23.269166 containerd[1661]: time="2026-01-14T01:09:23.266469589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f46fb7d64-nhlgd,Uid:879f4058-0721-4bd6-ad99-48cd22966609,Namespace:calico-system,Attempt:0,} returns sandbox id \"65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975\"" Jan 14 01:09:23.293343 kubelet[2949]: E0114 01:09:23.292348 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:23.304335 containerd[1661]: time="2026-01-14T01:09:23.304299757Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:09:23.349543 systemd[1]: Started cri-containerd-23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f.scope - libcontainer container 23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f. Jan 14 01:09:23.513000 audit: BPF prog-id=159 op=LOAD Jan 14 01:09:23.520000 audit: BPF prog-id=160 op=LOAD Jan 14 01:09:23.520000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.524000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:09:23.524000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.525000 audit: BPF prog-id=161 op=LOAD Jan 14 01:09:23.525000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.525000 audit: BPF prog-id=162 op=LOAD Jan 14 01:09:23.525000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.528000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:09:23.528000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.528000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:09:23.528000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.533000 audit: BPF prog-id=163 op=LOAD Jan 14 01:09:23.533000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3509 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233343937663337306664396465383135613437386339623931303632 Jan 14 01:09:23.782452 containerd[1661]: time="2026-01-14T01:09:23.778331477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6kp8,Uid:7020f34e-5e70-41e4-87b7-9b217db7b4c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\"" Jan 14 01:09:23.782563 kubelet[2949]: E0114 01:09:23.780166 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:24.234448 kubelet[2949]: E0114 01:09:24.232399 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" 
podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:24.300267 update_engine[1637]: I20260114 01:09:24.300194 1637 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:09:24.303385 update_engine[1637]: I20260114 01:09:24.301242 1637 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:09:24.303385 update_engine[1637]: I20260114 01:09:24.303328 1637 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 01:09:24.326144 update_engine[1637]: E20260114 01:09:24.325400 1637 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 01:09:24.326144 update_engine[1637]: I20260114 01:09:24.326037 1637 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 01:09:24.326144 update_engine[1637]: I20260114 01:09:24.326057 1637 omaha_request_action.cc:617] Omaha request response: Jan 14 01:09:24.326357 update_engine[1637]: E20260114 01:09:24.326177 1637 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326210 1637 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326222 1637 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326231 1637 update_attempter.cc:306] Processing Done. Jan 14 01:09:24.326357 update_engine[1637]: E20260114 01:09:24.326251 1637 update_attempter.cc:619] Update failed. 
Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326261 1637 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326270 1637 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 01:09:24.326357 update_engine[1637]: I20260114 01:09:24.326282 1637 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 14 01:09:24.327050 update_engine[1637]: I20260114 01:09:24.326363 1637 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 01:09:24.327050 update_engine[1637]: I20260114 01:09:24.326395 1637 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 01:09:24.327050 update_engine[1637]: I20260114 01:09:24.326406 1637 omaha_request_action.cc:272] Request: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: Jan 14 01:09:24.327050 update_engine[1637]: I20260114 01:09:24.326418 1637 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:09:24.327050 update_engine[1637]: I20260114 01:09:24.326447 1637 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:09:24.327354 locksmithd[1711]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 01:09:24.332947 update_engine[1637]: I20260114 01:09:24.332888 1637 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 01:09:24.349047 update_engine[1637]: E20260114 01:09:24.347314 1637 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347517 1637 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347529 1637 omaha_request_action.cc:617] Omaha request response: Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347539 1637 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347548 1637 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347554 1637 update_attempter.cc:306] Processing Done. Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347563 1637 update_attempter.cc:310] Error event sent. Jan 14 01:09:24.349047 update_engine[1637]: I20260114 01:09:24.347571 1637 update_check_scheduler.cc:74] Next update check in 49m8s Jan 14 01:09:24.354158 locksmithd[1711]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 01:09:24.501167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336559889.mount: Deactivated successfully. 
Jan 14 01:09:26.220297 kubelet[2949]: E0114 01:09:26.219459 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:28.225200 kubelet[2949]: E0114 01:09:28.222204 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:30.225537 kubelet[2949]: E0114 01:09:30.218159 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:32.245939 kubelet[2949]: E0114 01:09:32.245246 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:33.534759 containerd[1661]: time="2026-01-14T01:09:33.533557670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:33.541513 containerd[1661]: time="2026-01-14T01:09:33.541480603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:09:33.547027 containerd[1661]: 
time="2026-01-14T01:09:33.546852768Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:33.553442 containerd[1661]: time="2026-01-14T01:09:33.553275704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:33.554138 containerd[1661]: time="2026-01-14T01:09:33.554027697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 10.24865586s" Jan 14 01:09:33.554207 containerd[1661]: time="2026-01-14T01:09:33.554139555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:09:33.560895 containerd[1661]: time="2026-01-14T01:09:33.560432170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:09:33.617432 containerd[1661]: time="2026-01-14T01:09:33.617357624Z" level=info msg="CreateContainer within sandbox \"65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:09:33.644387 containerd[1661]: time="2026-01-14T01:09:33.643405580Z" level=info msg="Container 195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:33.688185 containerd[1661]: time="2026-01-14T01:09:33.688120122Z" level=info msg="CreateContainer within sandbox 
\"65fc5f740289df4ce76ae9e30a05b17372cd5dd3570472a218d7b4d17ab37975\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c\"" Jan 14 01:09:33.690879 containerd[1661]: time="2026-01-14T01:09:33.690844286Z" level=info msg="StartContainer for \"195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c\"" Jan 14 01:09:33.694953 containerd[1661]: time="2026-01-14T01:09:33.693973316Z" level=info msg="connecting to shim 195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c" address="unix:///run/containerd/s/c0ce2c13aa453824772892f3e19bc7217dd72ab466f96a0e41081337b35a5ff2" protocol=ttrpc version=3 Jan 14 01:09:33.762159 systemd[1]: Started cri-containerd-195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c.scope - libcontainer container 195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c. Jan 14 01:09:33.809000 audit: BPF prog-id=164 op=LOAD Jan 14 01:09:33.822202 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 14 01:09:33.822406 kernel: audit: type=1334 audit(1768352973.809:569): prog-id=164 op=LOAD Jan 14 01:09:33.811000 audit: BPF prog-id=165 op=LOAD Jan 14 01:09:33.836407 kernel: audit: type=1334 audit(1768352973.811:570): prog-id=165 op=LOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.868940 kernel: audit: type=1300 audit(1768352973.811:570): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.898838 kernel: audit: type=1327 audit(1768352973.811:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:09:33.908174 kernel: audit: type=1334 audit(1768352973.811:571): prog-id=165 op=UNLOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.973343 kernel: audit: type=1300 audit(1768352973.811:571): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.973462 kernel: audit: type=1327 audit(1768352973.811:571): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: BPF prog-id=166 op=LOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.005294 containerd[1661]: time="2026-01-14T01:09:34.004942777Z" level=info msg="StartContainer for \"195a0d2090133a8a84552533530469e274a456af27dd2501220277ab7e0adb3c\" returns successfully" Jan 14 01:09:34.020571 kernel: audit: type=1334 audit(1768352973.811:572): prog-id=166 op=LOAD Jan 14 01:09:34.020884 kernel: audit: type=1300 audit(1768352973.811:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.022057 kernel: audit: type=1327 audit(1768352973.811:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: BPF prog-id=167 
op=LOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:33.811000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 
01:09:33.811000 audit: BPF prog-id=168 op=LOAD Jan 14 01:09:33.811000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3424 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:33.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139356130643230393031333361386138343535323533333533303436 Jan 14 01:09:34.220074 kubelet[2949]: E0114 01:09:34.219360 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:34.347064 containerd[1661]: time="2026-01-14T01:09:34.346191473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:34.351450 containerd[1661]: time="2026-01-14T01:09:34.351415922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:34.354839 containerd[1661]: time="2026-01-14T01:09:34.354806539Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:34.362941 containerd[1661]: time="2026-01-14T01:09:34.362902436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:34.371439 containerd[1661]: time="2026-01-14T01:09:34.371399012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 810.927419ms" Jan 14 01:09:34.371883 containerd[1661]: time="2026-01-14T01:09:34.371596872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:09:34.404590 containerd[1661]: time="2026-01-14T01:09:34.403881893Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:09:34.481289 containerd[1661]: time="2026-01-14T01:09:34.480488747Z" level=info msg="Container 5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:34.503927 containerd[1661]: time="2026-01-14T01:09:34.503880317Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1\"" Jan 14 01:09:34.509417 containerd[1661]: time="2026-01-14T01:09:34.509254729Z" level=info msg="StartContainer for \"5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1\"" Jan 14 01:09:34.522196 containerd[1661]: time="2026-01-14T01:09:34.521556344Z" level=info msg="connecting to shim 5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1" 
address="unix:///run/containerd/s/8468593821133bdf76f85428454cd58917bdcebdd39eea30032871f2f0bbdc6b" protocol=ttrpc version=3 Jan 14 01:09:34.579129 systemd[1]: Started cri-containerd-5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1.scope - libcontainer container 5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1. Jan 14 01:09:34.589420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876253666.mount: Deactivated successfully. Jan 14 01:09:34.652287 kubelet[2949]: E0114 01:09:34.652041 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:34.742295 kubelet[2949]: I0114 01:09:34.741162 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f46fb7d64-nhlgd" podStartSLOduration=3.484587327 podStartE2EDuration="13.741146884s" podCreationTimestamp="2026-01-14 01:09:21 +0000 UTC" firstStartedPulling="2026-01-14 01:09:23.303290054 +0000 UTC m=+55.961738141" lastFinishedPulling="2026-01-14 01:09:33.559849611 +0000 UTC m=+66.218297698" observedRunningTime="2026-01-14 01:09:34.728301485 +0000 UTC m=+67.386749572" watchObservedRunningTime="2026-01-14 01:09:34.741146884 +0000 UTC m=+67.399594981" Jan 14 01:09:34.742295 kubelet[2949]: E0114 01:09:34.740605 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.742295 kubelet[2949]: W0114 01:09:34.741368 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.742295 kubelet[2949]: E0114 01:09:34.741387 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.747845 kubelet[2949]: E0114 01:09:34.747473 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.747845 kubelet[2949]: W0114 01:09:34.747491 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.747845 kubelet[2949]: E0114 01:09:34.747508 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.748491 kubelet[2949]: E0114 01:09:34.748475 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.748582 kubelet[2949]: W0114 01:09:34.748542 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.748865 kubelet[2949]: E0114 01:09:34.748842 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.749601 kubelet[2949]: E0114 01:09:34.749585 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.750013 kubelet[2949]: W0114 01:09:34.749858 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.750013 kubelet[2949]: E0114 01:09:34.749882 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.750834 kubelet[2949]: E0114 01:09:34.750813 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.750922 kubelet[2949]: W0114 01:09:34.750906 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.751003 kubelet[2949]: E0114 01:09:34.750986 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.752208 kubelet[2949]: E0114 01:09:34.752189 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.752284 kubelet[2949]: W0114 01:09:34.752269 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.752381 kubelet[2949]: E0114 01:09:34.752361 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.755301 kubelet[2949]: E0114 01:09:34.755285 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.755379 kubelet[2949]: W0114 01:09:34.755366 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.755464 kubelet[2949]: E0114 01:09:34.755446 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.756120 kubelet[2949]: E0114 01:09:34.756071 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.756120 kubelet[2949]: W0114 01:09:34.756086 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.756120 kubelet[2949]: E0114 01:09:34.756099 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.759171 kubelet[2949]: E0114 01:09:34.759063 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.759171 kubelet[2949]: W0114 01:09:34.759087 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.759171 kubelet[2949]: E0114 01:09:34.759104 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.760106 kubelet[2949]: E0114 01:09:34.760087 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.760290 kubelet[2949]: W0114 01:09:34.760198 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.760290 kubelet[2949]: E0114 01:09:34.760223 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.761506 kubelet[2949]: E0114 01:09:34.761491 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.761578 kubelet[2949]: W0114 01:09:34.761566 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.761967 kubelet[2949]: E0114 01:09:34.761912 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.763042 kubelet[2949]: E0114 01:09:34.762930 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.763042 kubelet[2949]: W0114 01:09:34.762950 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.763042 kubelet[2949]: E0114 01:09:34.762966 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.765203 kubelet[2949]: E0114 01:09:34.765186 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.765296 kubelet[2949]: W0114 01:09:34.765279 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.765392 kubelet[2949]: E0114 01:09:34.765372 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.766438 kubelet[2949]: E0114 01:09:34.766290 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.766438 kubelet[2949]: W0114 01:09:34.766370 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.766438 kubelet[2949]: E0114 01:09:34.766382 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.768361 kubelet[2949]: E0114 01:09:34.768345 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.768788 kubelet[2949]: W0114 01:09:34.768425 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.768788 kubelet[2949]: E0114 01:09:34.768440 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.804000 audit: BPF prog-id=169 op=LOAD Jan 14 01:09:34.804000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3509 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566353931316334323339303132323964313864613063306166326330 Jan 14 01:09:34.804000 audit: BPF prog-id=170 op=LOAD Jan 14 01:09:34.804000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3509 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566353931316334323339303132323964313864613063306166326330 Jan 14 01:09:34.804000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:09:34.804000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.804000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566353931316334323339303132323964313864613063306166326330 Jan 14 01:09:34.804000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:09:34.804000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566353931316334323339303132323964313864613063306166326330 Jan 14 01:09:34.804000 audit: BPF prog-id=171 op=LOAD Jan 14 01:09:34.804000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3509 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566353931316334323339303132323964313864613063306166326330 Jan 14 01:09:34.817973 kubelet[2949]: E0114 01:09:34.817034 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.818253 kubelet[2949]: W0114 01:09:34.818060 2949 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.818253 kubelet[2949]: E0114 01:09:34.818180 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.827886 kubelet[2949]: E0114 01:09:34.823170 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.827886 kubelet[2949]: W0114 01:09:34.823191 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.827886 kubelet[2949]: E0114 01:09:34.823213 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.827886 kubelet[2949]: E0114 01:09:34.824368 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.827886 kubelet[2949]: W0114 01:09:34.824383 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.827886 kubelet[2949]: E0114 01:09:34.824401 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.828337 kubelet[2949]: E0114 01:09:34.828233 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.828337 kubelet[2949]: W0114 01:09:34.828255 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.828337 kubelet[2949]: E0114 01:09:34.828273 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.831579 kubelet[2949]: E0114 01:09:34.831158 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.831579 kubelet[2949]: W0114 01:09:34.831259 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.831579 kubelet[2949]: E0114 01:09:34.831279 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.834377 kubelet[2949]: E0114 01:09:34.834219 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.834377 kubelet[2949]: W0114 01:09:34.834314 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.834377 kubelet[2949]: E0114 01:09:34.834333 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.837809 kubelet[2949]: E0114 01:09:34.836234 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.837809 kubelet[2949]: W0114 01:09:34.836823 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.837809 kubelet[2949]: E0114 01:09:34.836845 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.837809 kubelet[2949]: E0114 01:09:34.837600 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.837809 kubelet[2949]: W0114 01:09:34.837812 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.838016 kubelet[2949]: E0114 01:09:34.837830 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.842404 kubelet[2949]: E0114 01:09:34.842366 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.842404 kubelet[2949]: W0114 01:09:34.842384 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.842404 kubelet[2949]: E0114 01:09:34.842399 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.844546 kubelet[2949]: E0114 01:09:34.844275 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.844546 kubelet[2949]: W0114 01:09:34.844357 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.844546 kubelet[2949]: E0114 01:09:34.844374 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.847853 kubelet[2949]: E0114 01:09:34.846327 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.847853 kubelet[2949]: W0114 01:09:34.846420 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.847853 kubelet[2949]: E0114 01:09:34.846439 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.847853 kubelet[2949]: E0114 01:09:34.847445 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.847853 kubelet[2949]: W0114 01:09:34.847456 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.847853 kubelet[2949]: E0114 01:09:34.847468 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.849002 kubelet[2949]: E0114 01:09:34.848573 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.849066 kubelet[2949]: W0114 01:09:34.848880 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.849109 kubelet[2949]: E0114 01:09:34.849065 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.851446 kubelet[2949]: E0114 01:09:34.851308 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.851516 kubelet[2949]: W0114 01:09:34.851504 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.851566 kubelet[2949]: E0114 01:09:34.851521 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.860788 kubelet[2949]: E0114 01:09:34.860405 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.860788 kubelet[2949]: W0114 01:09:34.860425 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.860788 kubelet[2949]: E0114 01:09:34.860445 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.863907 kubelet[2949]: E0114 01:09:34.862862 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.863907 kubelet[2949]: W0114 01:09:34.862881 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.863907 kubelet[2949]: E0114 01:09:34.862899 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.864423 kubelet[2949]: E0114 01:09:34.864329 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.864423 kubelet[2949]: W0114 01:09:34.864350 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.864423 kubelet[2949]: E0114 01:09:34.864364 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:09:34.870250 kubelet[2949]: E0114 01:09:34.867951 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:09:34.870250 kubelet[2949]: W0114 01:09:34.867970 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:09:34.870250 kubelet[2949]: E0114 01:09:34.867982 2949 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:09:34.872000 audit[3669]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:34.872000 audit[3669]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc14672340 a2=0 a3=7ffc1467232c items=0 ppid=3100 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:34.878000 audit[3669]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:34.878000 audit[3669]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc14672340 a2=0 a3=7ffc1467232c items=0 ppid=3100 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:34.878000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:34.952496 containerd[1661]: time="2026-01-14T01:09:34.952349342Z" level=info msg="StartContainer for \"5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1\" returns successfully" Jan 14 01:09:34.964817 systemd[1]: cri-containerd-5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1.scope: Deactivated successfully. Jan 14 01:09:34.965452 systemd[1]: cri-containerd-5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1.scope: Consumed 129ms CPU time, 6.5M memory peak, 4K read from disk, 3.1M written to disk. Jan 14 01:09:34.971000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:09:34.980269 containerd[1661]: time="2026-01-14T01:09:34.979961066Z" level=info msg="received container exit event container_id:\"5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1\" id:\"5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1\" pid:3626 exited_at:{seconds:1768352974 nanos:976918678}" Jan 14 01:09:35.057923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f5911c423901229d18da0c0af2c0b4a22fc51571066fa64356c0eba733202d1-rootfs.mount: Deactivated successfully. 
Jan 14 01:09:35.219556 kubelet[2949]: E0114 01:09:35.219147 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:35.694921 kubelet[2949]: E0114 01:09:35.690960 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:35.694921 kubelet[2949]: E0114 01:09:35.691837 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:35.697903 containerd[1661]: time="2026-01-14T01:09:35.695328207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:09:36.238071 kubelet[2949]: E0114 01:09:36.236290 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:36.741182 kubelet[2949]: E0114 01:09:36.739033 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:38.228260 kubelet[2949]: E0114 01:09:38.228214 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:40.226993 kubelet[2949]: E0114 01:09:40.226921 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:42.225260 kubelet[2949]: E0114 01:09:42.225096 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:44.221840 kubelet[2949]: E0114 01:09:44.219917 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:45.010878 containerd[1661]: time="2026-01-14T01:09:45.010549792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:45.020385 containerd[1661]: time="2026-01-14T01:09:45.020157433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70443237" Jan 14 01:09:45.024580 containerd[1661]: time="2026-01-14T01:09:45.024469129Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:45.030251 containerd[1661]: time="2026-01-14T01:09:45.030002384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:45.031945 containerd[1661]: 
time="2026-01-14T01:09:45.031534323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 9.336169358s" Jan 14 01:09:45.033590 containerd[1661]: time="2026-01-14T01:09:45.033373595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:09:45.082502 containerd[1661]: time="2026-01-14T01:09:45.082354734Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:09:45.125156 containerd[1661]: time="2026-01-14T01:09:45.123183982Z" level=info msg="Container 0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:45.169279 containerd[1661]: time="2026-01-14T01:09:45.169200429Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e\"" Jan 14 01:09:45.175344 containerd[1661]: time="2026-01-14T01:09:45.174135385Z" level=info msg="StartContainer for \"0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e\"" Jan 14 01:09:45.183516 containerd[1661]: time="2026-01-14T01:09:45.183325967Z" level=info msg="connecting to shim 0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e" address="unix:///run/containerd/s/8468593821133bdf76f85428454cd58917bdcebdd39eea30032871f2f0bbdc6b" protocol=ttrpc version=3 Jan 14 01:09:45.334490 systemd[1]: Started 
cri-containerd-0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e.scope - libcontainer container 0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e. Jan 14 01:09:45.484059 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:09:45.484214 kernel: audit: type=1334 audit(1768352985.471:585): prog-id=172 op=LOAD Jan 14 01:09:45.471000 audit: BPF prog-id=172 op=LOAD Jan 14 01:09:45.471000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.534545 kernel: audit: type=1300 audit(1768352985.471:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.472000 audit: BPF prog-id=173 op=LOAD Jan 14 01:09:45.590907 kernel: audit: type=1327 audit(1768352985.471:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.591019 kernel: audit: type=1334 audit(1768352985.472:586): prog-id=173 op=LOAD Jan 14 01:09:45.591072 kernel: audit: type=1300 audit(1768352985.472:586): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.472000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.633060 kernel: audit: type=1327 audit(1768352985.472:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.688400 kernel: audit: type=1334 audit(1768352985.472:587): prog-id=173 op=UNLOAD Jan 14 01:09:45.472000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:09:45.472000 audit[3716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.744589 kernel: audit: type=1300 audit(1768352985.472:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:45.745274 kernel: audit: type=1327 audit(1768352985.472:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.764327 containerd[1661]: time="2026-01-14T01:09:45.764272021Z" level=info msg="StartContainer for \"0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e\" returns successfully" Jan 14 01:09:45.791126 kernel: audit: type=1334 audit(1768352985.472:588): prog-id=172 op=UNLOAD Jan 14 01:09:45.472000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:09:45.472000 audit[3716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.472000 audit: BPF prog-id=174 op=LOAD Jan 14 01:09:45.472000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3509 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:45.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036363733323261373965316263626434393139306133393766346235 Jan 14 01:09:45.819087 kubelet[2949]: E0114 01:09:45.819022 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:46.223322 kubelet[2949]: E0114 01:09:46.222364 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:46.825985 kubelet[2949]: E0114 01:09:46.825445 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:47.698200 systemd[1]: cri-containerd-0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e.scope: Deactivated successfully. Jan 14 01:09:47.700947 systemd[1]: cri-containerd-0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e.scope: Consumed 1.801s CPU time, 181M memory peak, 4.1M read from disk, 171.3M written to disk. 
Jan 14 01:09:47.702441 containerd[1661]: time="2026-01-14T01:09:47.702394827Z" level=info msg="received container exit event container_id:\"0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e\" id:\"0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e\" pid:3728 exited_at:{seconds:1768352987 nanos:700013719}" Jan 14 01:09:47.715000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:09:47.863277 kubelet[2949]: I0114 01:09:47.862997 2949 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:09:47.885380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0667322a79e1bcbd49190a397f4b514069f27e2b2160aa6ebd8ad9f11808e27e-rootfs.mount: Deactivated successfully. Jan 14 01:09:48.083381 kubelet[2949]: I0114 01:09:48.083316 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf2nw\" (UniqueName: \"kubernetes.io/projected/6b7ab4e1-8df7-452b-9e94-dfd2290c9d55-kube-api-access-sf2nw\") pod \"coredns-674b8bbfcf-pvf55\" (UID: \"6b7ab4e1-8df7-452b-9e94-dfd2290c9d55\") " pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:09:48.083381 kubelet[2949]: I0114 01:09:48.083359 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e-config-volume\") pod \"coredns-674b8bbfcf-mwb9m\" (UID: \"c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e\") " pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:09:48.084064 kubelet[2949]: I0114 01:09:48.083394 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc95t\" (UniqueName: \"kubernetes.io/projected/c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e-kube-api-access-fc95t\") pod \"coredns-674b8bbfcf-mwb9m\" (UID: \"c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e\") " pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:09:48.084064 kubelet[2949]: I0114 
01:09:48.083422 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b7ab4e1-8df7-452b-9e94-dfd2290c9d55-config-volume\") pod \"coredns-674b8bbfcf-pvf55\" (UID: \"6b7ab4e1-8df7-452b-9e94-dfd2290c9d55\") " pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:09:48.084064 kubelet[2949]: I0114 01:09:48.083451 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73c10481-1af3-4a40-9a8f-b16adcb34162-calico-apiserver-certs\") pod \"calico-apiserver-57c9c7ff47-ct9w8\" (UID: \"73c10481-1af3-4a40-9a8f-b16adcb34162\") " pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:09:48.084064 kubelet[2949]: I0114 01:09:48.083474 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzh9\" (UniqueName: \"kubernetes.io/projected/73c10481-1af3-4a40-9a8f-b16adcb34162-kube-api-access-kzzh9\") pod \"calico-apiserver-57c9c7ff47-ct9w8\" (UID: \"73c10481-1af3-4a40-9a8f-b16adcb34162\") " pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:09:48.119313 systemd[1]: Created slice kubepods-burstable-podc0d00d41_fb7b_4bcd_a0ae_5d87830fc77e.slice - libcontainer container kubepods-burstable-podc0d00d41_fb7b_4bcd_a0ae_5d87830fc77e.slice. Jan 14 01:09:48.191615 systemd[1]: Created slice kubepods-burstable-pod6b7ab4e1_8df7_452b_9e94_dfd2290c9d55.slice - libcontainer container kubepods-burstable-pod6b7ab4e1_8df7_452b_9e94_dfd2290c9d55.slice. Jan 14 01:09:48.276487 systemd[1]: Created slice kubepods-besteffort-pod73c10481_1af3_4a40_9a8f_b16adcb34162.slice - libcontainer container kubepods-besteffort-pod73c10481_1af3_4a40_9a8f_b16adcb34162.slice. 
Jan 14 01:09:48.296507 kubelet[2949]: I0114 01:09:48.288433 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be5353a-35d3-4a4f-8ef3-74707ad90bb4-config\") pod \"goldmane-666569f655-mrnrg\" (UID: \"0be5353a-35d3-4a4f-8ef3-74707ad90bb4\") " pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:48.296507 kubelet[2949]: I0114 01:09:48.288484 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2vt\" (UniqueName: \"kubernetes.io/projected/6aa9117b-386d-4c03-8126-035c7bae8bf4-kube-api-access-vt2vt\") pod \"whisker-5d97df889-zplfq\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:09:48.296507 kubelet[2949]: I0114 01:09:48.288515 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-ca-bundle\") pod \"whisker-5d97df889-zplfq\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:09:48.296507 kubelet[2949]: I0114 01:09:48.288542 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ksr\" (UniqueName: \"kubernetes.io/projected/4210e14f-14d6-426e-8696-17d6edfc7412-kube-api-access-m7ksr\") pod \"calico-kube-controllers-cd8889796-8dksn\" (UID: \"4210e14f-14d6-426e-8696-17d6edfc7412\") " pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:09:48.296507 kubelet[2949]: I0114 01:09:48.288564 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0be5353a-35d3-4a4f-8ef3-74707ad90bb4-goldmane-key-pair\") pod \"goldmane-666569f655-mrnrg\" (UID: 
\"0be5353a-35d3-4a4f-8ef3-74707ad90bb4\") " pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:48.297333 kubelet[2949]: I0114 01:09:48.288584 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4210e14f-14d6-426e-8696-17d6edfc7412-tigera-ca-bundle\") pod \"calico-kube-controllers-cd8889796-8dksn\" (UID: \"4210e14f-14d6-426e-8696-17d6edfc7412\") " pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:09:48.297333 kubelet[2949]: I0114 01:09:48.288610 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcp97\" (UniqueName: \"kubernetes.io/projected/0be5353a-35d3-4a4f-8ef3-74707ad90bb4-kube-api-access-xcp97\") pod \"goldmane-666569f655-mrnrg\" (UID: \"0be5353a-35d3-4a4f-8ef3-74707ad90bb4\") " pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:48.297333 kubelet[2949]: I0114 01:09:48.293027 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5353a-35d3-4a4f-8ef3-74707ad90bb4-goldmane-ca-bundle\") pod \"goldmane-666569f655-mrnrg\" (UID: \"0be5353a-35d3-4a4f-8ef3-74707ad90bb4\") " pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:48.297333 kubelet[2949]: I0114 01:09:48.294286 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-backend-key-pair\") pod \"whisker-5d97df889-zplfq\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:09:48.297333 kubelet[2949]: I0114 01:09:48.294340 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/e7d0a51e-3dc4-4308-8f17-61e1305f307f-calico-apiserver-certs\") pod \"calico-apiserver-57c9c7ff47-drdgg\" (UID: \"e7d0a51e-3dc4-4308-8f17-61e1305f307f\") " pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:09:48.297544 kubelet[2949]: I0114 01:09:48.294362 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqf2\" (UniqueName: \"kubernetes.io/projected/e7d0a51e-3dc4-4308-8f17-61e1305f307f-kube-api-access-vjqf2\") pod \"calico-apiserver-57c9c7ff47-drdgg\" (UID: \"e7d0a51e-3dc4-4308-8f17-61e1305f307f\") " pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:09:48.297625 systemd[1]: Created slice kubepods-besteffort-pod6aa9117b_386d_4c03_8126_035c7bae8bf4.slice - libcontainer container kubepods-besteffort-pod6aa9117b_386d_4c03_8126_035c7bae8bf4.slice. Jan 14 01:09:48.336218 systemd[1]: Created slice kubepods-besteffort-pode7d0a51e_3dc4_4308_8f17_61e1305f307f.slice - libcontainer container kubepods-besteffort-pode7d0a51e_3dc4_4308_8f17_61e1305f307f.slice. Jan 14 01:09:48.364047 systemd[1]: Created slice kubepods-besteffort-pod0be5353a_35d3_4a4f_8ef3_74707ad90bb4.slice - libcontainer container kubepods-besteffort-pod0be5353a_35d3_4a4f_8ef3_74707ad90bb4.slice. Jan 14 01:09:48.383504 systemd[1]: Created slice kubepods-besteffort-pod4210e14f_14d6_426e_8696_17d6edfc7412.slice - libcontainer container kubepods-besteffort-pod4210e14f_14d6_426e_8696_17d6edfc7412.slice. Jan 14 01:09:48.409624 systemd[1]: Created slice kubepods-besteffort-pod1036b5d9_9d65_4e70_adc3_802295ee7a1e.slice - libcontainer container kubepods-besteffort-pod1036b5d9_9d65_4e70_adc3_802295ee7a1e.slice. 
Jan 14 01:09:48.449983 containerd[1661]: time="2026-01-14T01:09:48.449610663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:48.463339 kubelet[2949]: E0114 01:09:48.463306 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:48.520521 kubelet[2949]: E0114 01:09:48.520484 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:48.564097 containerd[1661]: time="2026-01-14T01:09:48.480617476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,}" Jan 14 01:09:48.564246 containerd[1661]: time="2026-01-14T01:09:48.522477784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,}" Jan 14 01:09:48.586579 containerd[1661]: time="2026-01-14T01:09:48.586216148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:09:48.652499 containerd[1661]: time="2026-01-14T01:09:48.650305561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:48.679448 containerd[1661]: time="2026-01-14T01:09:48.679379036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:48.704458 containerd[1661]: 
time="2026-01-14T01:09:48.704180540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:09:48.715163 containerd[1661]: time="2026-01-14T01:09:48.714503223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:49.073617 kubelet[2949]: E0114 01:09:49.072515 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:49.098518 containerd[1661]: time="2026-01-14T01:09:49.096498882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:09:49.219447 kubelet[2949]: E0114 01:09:49.219405 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:49.419618 containerd[1661]: time="2026-01-14T01:09:49.419160234Z" level=error msg="Failed to destroy network for sandbox \"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.429106 systemd[1]: run-netns-cni\x2d96608320\x2d6089\x2df9b2\x2dbc77\x2d538f5d8f45e6.mount: Deactivated successfully. 
Jan 14 01:09:49.471002 containerd[1661]: time="2026-01-14T01:09:49.470232483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.471339 kubelet[2949]: E0114 01:09:49.470556 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.471339 kubelet[2949]: E0114 01:09:49.470616 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:09:49.471339 kubelet[2949]: E0114 01:09:49.470967 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" 
Jan 14 01:09:49.472211 kubelet[2949]: E0114 01:09:49.471968 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09b53556eee7eb0c254180437563a8515221f5aec19d61ccd068dc9b03b7784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:09:49.641287 containerd[1661]: time="2026-01-14T01:09:49.641085799Z" level=error msg="Failed to destroy network for sandbox \"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.684182 containerd[1661]: time="2026-01-14T01:09:49.681085664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.684182 containerd[1661]: time="2026-01-14T01:09:49.682391410Z" level=error msg="Failed to destroy network for sandbox \"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.688559 kubelet[2949]: E0114 01:09:49.687336 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.688559 kubelet[2949]: E0114 01:09:49.687508 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:09:49.688559 kubelet[2949]: E0114 01:09:49.687535 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:09:49.692606 kubelet[2949]: E0114 01:09:49.687585 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"895d662bacbad46bbab9b9ba5dc01f2cdcb0322aa8c82b6c8c886f3409a217e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mwb9m" podUID="c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e" Jan 14 01:09:49.699505 containerd[1661]: time="2026-01-14T01:09:49.699386186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.700284 kubelet[2949]: E0114 01:09:49.700177 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.700284 kubelet[2949]: E0114 01:09:49.700226 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:09:49.700284 kubelet[2949]: E0114 01:09:49.700255 2949 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:09:49.701027 kubelet[2949]: E0114 01:09:49.700315 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d751f0f4934692682611b1eb1940189185f1a2d1ad0ae3892862201cd977a843\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:09:49.771370 containerd[1661]: time="2026-01-14T01:09:49.769160595Z" level=error msg="Failed to destroy network for sandbox \"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.790292 containerd[1661]: time="2026-01-14T01:09:49.788310501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.790620 kubelet[2949]: E0114 01:09:49.790300 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.790620 kubelet[2949]: E0114 01:09:49.790377 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:09:49.790620 kubelet[2949]: E0114 01:09:49.790410 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:09:49.791185 kubelet[2949]: E0114 01:09:49.790467 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4520665eff5e22d2b2f0a40846d205cef0c276ec57979e98fe8c560d219d2552\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pvf55" podUID="6b7ab4e1-8df7-452b-9e94-dfd2290c9d55" Jan 14 01:09:49.826320 containerd[1661]: time="2026-01-14T01:09:49.826097226Z" level=error msg="Failed to destroy network for sandbox \"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.842316 containerd[1661]: time="2026-01-14T01:09:49.842255819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.847066 kubelet[2949]: E0114 01:09:49.844064 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.847066 kubelet[2949]: E0114 01:09:49.844123 2949 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:49.847066 kubelet[2949]: E0114 01:09:49.844152 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:09:49.848150 kubelet[2949]: E0114 01:09:49.844208 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88e86cfb146baf4aafe7136803380de6f766eb28bc6ba5c967f242fda74ffc8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:09:49.866167 containerd[1661]: time="2026-01-14T01:09:49.865533771Z" level=error msg="Failed to destroy network for sandbox \"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.886453 systemd[1]: run-netns-cni\x2df80a2a01\x2d8ed0\x2d2d12\x2d60c7\x2d907fc1928bb1.mount: Deactivated successfully. Jan 14 01:09:49.886608 systemd[1]: run-netns-cni\x2dd98b79c9\x2d4ecd\x2d1a4b\x2d3ece\x2d7a40e89eb31b.mount: Deactivated successfully. Jan 14 01:09:49.892344 containerd[1661]: time="2026-01-14T01:09:49.892187389Z" level=error msg="Failed to destroy network for sandbox \"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.897037 systemd[1]: run-netns-cni\x2d49e371f4\x2dcb71\x2d28e8\x2dd45f\x2dc05dd19e0c5d.mount: Deactivated successfully. Jan 14 01:09:49.899127 systemd[1]: run-netns-cni\x2d12683e49\x2d538f\x2d8430\x2d531d\x2dee9907042654.mount: Deactivated successfully. Jan 14 01:09:49.899239 systemd[1]: run-netns-cni\x2ddbac55de\x2d1d83\x2d0bcc\x2d4a6f\x2d95a8f7400c6e.mount: Deactivated successfully. Jan 14 01:09:49.919984 systemd[1]: run-netns-cni\x2d780c5b2d\x2d7517\x2d3a64\x2db85c\x2d11945555490f.mount: Deactivated successfully. 
Jan 14 01:09:49.930361 containerd[1661]: time="2026-01-14T01:09:49.927503641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.953591 kubelet[2949]: E0114 01:09:49.946217 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.953591 kubelet[2949]: E0114 01:09:49.946611 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:09:49.953591 kubelet[2949]: E0114 01:09:49.946955 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:09:49.961263 kubelet[2949]: E0114 01:09:49.947018 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"803566a7e930024bf4e3cf05394e7a87c7063ff88f278d760a90f63162765326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:09:49.969160 containerd[1661]: time="2026-01-14T01:09:49.969096690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.976493 kubelet[2949]: E0114 01:09:49.976437 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:49.977076 kubelet[2949]: E0114 01:09:49.977037 2949 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:09:49.977194 kubelet[2949]: E0114 01:09:49.977167 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:09:49.977376 kubelet[2949]: E0114 01:09:49.977336 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b6409cbef77eba1ae368591e7a6794d8523d0fa8ed52c66b2dda1a510f598d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d97df889-zplfq" podUID="6aa9117b-386d-4c03-8126-035c7bae8bf4" Jan 14 01:09:50.118187 containerd[1661]: time="2026-01-14T01:09:50.118130264Z" level=error msg="Failed to destroy network for sandbox \"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:50.129417 systemd[1]: run-netns-cni\x2d80c21b04\x2d9cd6\x2da65c\x2d00e1\x2d2c06327abb76.mount: Deactivated successfully. Jan 14 01:09:50.155215 containerd[1661]: time="2026-01-14T01:09:50.155046325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:50.158089 kubelet[2949]: E0114 01:09:50.157149 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:50.158089 kubelet[2949]: E0114 01:09:50.157395 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:09:50.158089 kubelet[2949]: E0114 01:09:50.157430 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:09:50.159308 kubelet[2949]: E0114 01:09:50.157617 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89b34fc0c3a9f2483a40b9056abe66517c13f490b1a21173cb2f1e6e52fc6c63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:09:55.221263 kubelet[2949]: E0114 01:09:55.218287 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:55.221263 kubelet[2949]: E0114 01:09:55.220325 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:00.224394 kubelet[2949]: E0114 01:10:00.224354 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:00.247196 containerd[1661]: time="2026-01-14T01:10:00.246204702Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:00.711541 containerd[1661]: time="2026-01-14T01:10:00.711181791Z" level=error msg="Failed to destroy network for sandbox \"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:00.726091 systemd[1]: run-netns-cni\x2d5ff27275\x2d5dc3\x2dc281\x2da743\x2d2eff367248d0.mount: Deactivated successfully. Jan 14 01:10:00.748999 containerd[1661]: time="2026-01-14T01:10:00.745435599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:00.750572 kubelet[2949]: E0114 01:10:00.750521 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:00.751552 kubelet[2949]: E0114 01:10:00.751092 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:00.751552 kubelet[2949]: E0114 01:10:00.751127 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:00.751552 kubelet[2949]: E0114 01:10:00.751197 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9ef4a034f8895326f95dd4038084ac25728ec8eff1a609fd31c0329f01344ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mwb9m" podUID="c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e" Jan 14 01:10:01.230030 kubelet[2949]: E0114 01:10:01.228612 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:01.234049 containerd[1661]: time="2026-01-14T01:10:01.232525919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:01.859001 containerd[1661]: 
time="2026-01-14T01:10:01.856540463Z" level=error msg="Failed to destroy network for sandbox \"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:01.875142 systemd[1]: run-netns-cni\x2dd7e567c5\x2d3ac8\x2dde13\x2d52a5\x2df36ac9c2f07e.mount: Deactivated successfully. Jan 14 01:10:01.929349 containerd[1661]: time="2026-01-14T01:10:01.929280938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:01.935590 kubelet[2949]: E0114 01:10:01.930613 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:01.936191 kubelet[2949]: E0114 01:10:01.936162 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 
01:10:01.936344 kubelet[2949]: E0114 01:10:01.936320 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:10:01.936499 kubelet[2949]: E0114 01:10:01.936464 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"349bcf21f66c8f00d0aacbd77ff34150992c45d0ece66cc4d1ad4faf2451adb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pvf55" podUID="6b7ab4e1-8df7-452b-9e94-dfd2290c9d55" Jan 14 01:10:02.238026 containerd[1661]: time="2026-01-14T01:10:02.237472383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:03.122224 containerd[1661]: time="2026-01-14T01:10:03.117306094Z" level=error msg="Failed to destroy network for sandbox \"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:03.136476 systemd[1]: 
run-netns-cni\x2da5446ae0\x2de875\x2d5383\x2dc1b7\x2df5887aa5cd9f.mount: Deactivated successfully. Jan 14 01:10:03.162043 containerd[1661]: time="2026-01-14T01:10:03.160428932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:03.169101 kubelet[2949]: E0114 01:10:03.164028 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:03.169101 kubelet[2949]: E0114 01:10:03.164102 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:03.169101 kubelet[2949]: E0114 01:10:03.164132 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:03.178941 kubelet[2949]: E0114 01:10:03.164190 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"878be0f8b78c805e6f4f7438f815ce32a5faa06cc8e60051ae60608736521681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d97df889-zplfq" podUID="6aa9117b-386d-4c03-8126-035c7bae8bf4" Jan 14 01:10:03.238102 containerd[1661]: time="2026-01-14T01:10:03.237104715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:03.241414 containerd[1661]: time="2026-01-14T01:10:03.238482407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:04.142327 containerd[1661]: time="2026-01-14T01:10:04.137432116Z" level=error msg="Failed to destroy network for sandbox \"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.150435 systemd[1]: run-netns-cni\x2d1a6e4b17\x2daa6e\x2dc92e\x2d595a\x2d4d0c17cb1a1c.mount: Deactivated successfully. 
Jan 14 01:10:04.160362 containerd[1661]: time="2026-01-14T01:10:04.157089410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.170561 kubelet[2949]: E0114 01:10:04.170402 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.172355 kubelet[2949]: E0114 01:10:04.170585 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:04.172355 kubelet[2949]: E0114 01:10:04.170617 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:04.176613 kubelet[2949]: E0114 01:10:04.173484 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c883548ce276496049587390e683f3f4a46c532c9685495991e01277245aef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:04.244191 containerd[1661]: time="2026-01-14T01:10:04.243163828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:04.247178 containerd[1661]: time="2026-01-14T01:10:04.247140402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:04.531139 containerd[1661]: time="2026-01-14T01:10:04.528539842Z" level=error msg="Failed to destroy network for sandbox \"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.792087 systemd[1]: run-netns-cni\x2dca5c3244\x2d22ad\x2dc89d\x2d9bce\x2d3518d4725b08.mount: Deactivated successfully. 
Jan 14 01:10:04.821562 containerd[1661]: time="2026-01-14T01:10:04.817271003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.830594 kubelet[2949]: E0114 01:10:04.830348 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:04.830594 kubelet[2949]: E0114 01:10:04.830426 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:04.830594 kubelet[2949]: E0114 01:10:04.830453 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:04.833580 kubelet[2949]: E0114 01:10:04.831579 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10cb678282e621f83d7f1da67850a495b175bcf8eda8165c8a3665c9338b7a57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:10:05.123295 containerd[1661]: time="2026-01-14T01:10:05.116086318Z" level=error msg="Failed to destroy network for sandbox \"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.146578 systemd[1]: run-netns-cni\x2db3cb38f2\x2d8fa6\x2d9cdd\x2de663\x2da9bc84284110.mount: Deactivated successfully. 
Jan 14 01:10:05.180225 containerd[1661]: time="2026-01-14T01:10:05.179842779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.184616 kubelet[2949]: E0114 01:10:05.180192 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.184616 kubelet[2949]: E0114 01:10:05.180258 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:10:05.184616 kubelet[2949]: E0114 01:10:05.180285 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" 
Jan 14 01:10:05.194214 kubelet[2949]: E0114 01:10:05.180468 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbb8de7343080c6e7606dfdc0839fb19e061e06d5e5539448019abcc4e1ad23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:05.227280 containerd[1661]: time="2026-01-14T01:10:05.224455361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:05.253135 containerd[1661]: time="2026-01-14T01:10:05.253082404Z" level=error msg="Failed to destroy network for sandbox \"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.280160 systemd[1]: run-netns-cni\x2d98240306\x2d31bd\x2da3a9\x2d479b\x2d3d27c15ed924.mount: Deactivated successfully. 
Jan 14 01:10:05.345070 containerd[1661]: time="2026-01-14T01:10:05.344106831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.354034 kubelet[2949]: E0114 01:10:05.353101 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.354034 kubelet[2949]: E0114 01:10:05.353449 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:10:05.354034 kubelet[2949]: E0114 01:10:05.353477 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:10:05.354474 kubelet[2949]: E0114 01:10:05.353531 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddbbce132b33ea8c5c567e1efb21dfacc73405f636aa8f74c92600c81841b3f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:10:05.833566 containerd[1661]: time="2026-01-14T01:10:05.833509204Z" level=error msg="Failed to destroy network for sandbox \"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.853549 systemd[1]: run-netns-cni\x2dc047ea71\x2d7873\x2d63f0\x2d1dc1\x2db55000aa1723.mount: Deactivated successfully. 
Jan 14 01:10:05.909447 containerd[1661]: time="2026-01-14T01:10:05.899252672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.910145 kubelet[2949]: E0114 01:10:05.901419 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:05.910145 kubelet[2949]: E0114 01:10:05.901501 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:05.910145 kubelet[2949]: E0114 01:10:05.901538 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:05.910368 kubelet[2949]: E0114 01:10:05.901607 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a09c8990425258e3b60a18451c752cc27a1ea40e70d9d18313002a594e0ccb5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:10:12.221539 kubelet[2949]: E0114 01:10:12.221466 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:12.229252 containerd[1661]: time="2026-01-14T01:10:12.228509827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:12.757302 containerd[1661]: time="2026-01-14T01:10:12.752618273Z" level=error msg="Failed to destroy network for sandbox \"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:12.771137 systemd[1]: run-netns-cni\x2d68e88c14\x2dcd15\x2d2d8b\x2d4ab2\x2d6cdc08b83439.mount: Deactivated successfully. 
Jan 14 01:10:12.828976 containerd[1661]: time="2026-01-14T01:10:12.825453760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:12.829366 kubelet[2949]: E0114 01:10:12.828095 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:12.829366 kubelet[2949]: E0114 01:10:12.828169 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:12.829366 kubelet[2949]: E0114 01:10:12.828204 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:12.829531 kubelet[2949]: E0114 01:10:12.828265 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a86f7b6f49e9808d3df26af7111e10f69c4a24f71b44a5f80fb60119122b707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mwb9m" podUID="c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e" Jan 14 01:10:14.225305 kubelet[2949]: E0114 01:10:14.224562 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:14.232335 containerd[1661]: time="2026-01-14T01:10:14.228611394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:14.741944 containerd[1661]: time="2026-01-14T01:10:14.741884220Z" level=error msg="Failed to destroy network for sandbox \"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:14.753470 systemd[1]: run-netns-cni\x2d33084a04\x2d0433\x2d96d8\x2da5d4\x2dceec305e27c3.mount: Deactivated successfully. 
Jan 14 01:10:14.809349 containerd[1661]: time="2026-01-14T01:10:14.807461781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:14.812470 kubelet[2949]: E0114 01:10:14.808864 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:14.812470 kubelet[2949]: E0114 01:10:14.808918 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:10:14.812470 kubelet[2949]: E0114 01:10:14.808941 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:10:14.813097 kubelet[2949]: E0114 01:10:14.809005 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e1c8839beb0461258ef954d1459b15cf3c54819bf042b3591fe27df87f8dcda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pvf55" podUID="6b7ab4e1-8df7-452b-9e94-dfd2290c9d55" Jan 14 01:10:15.227520 containerd[1661]: time="2026-01-14T01:10:15.224285359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:15.724181 containerd[1661]: time="2026-01-14T01:10:15.721608495Z" level=error msg="Failed to destroy network for sandbox \"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:15.733592 systemd[1]: run-netns-cni\x2deaf576f4\x2d958b\x2dd1e7\x2db411\x2def82e5abb1df.mount: Deactivated successfully. 
Jan 14 01:10:15.745171 containerd[1661]: time="2026-01-14T01:10:15.744230495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:15.751169 kubelet[2949]: E0114 01:10:15.750923 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:15.751169 kubelet[2949]: E0114 01:10:15.750997 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:15.751169 kubelet[2949]: E0114 01:10:15.751030 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:15.752380 kubelet[2949]: E0114 01:10:15.751084 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54743e13f0f73df7911ee9009d49f139d6b12a33ad32c13ac7179894be5359d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d97df889-zplfq" podUID="6aa9117b-386d-4c03-8126-035c7bae8bf4" Jan 14 01:10:15.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:34040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.904273 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:34040.service - OpenSSH per-connection server daemon (10.0.0.1:34040). Jan 14 01:10:15.954941 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:10:15.955070 kernel: audit: type=1130 audit(1768353015.903:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:34040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:16.233136 containerd[1661]: time="2026-01-14T01:10:16.231254970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:16.244905 containerd[1661]: time="2026-01-14T01:10:16.242089089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:16.380000 audit[4387]: USER_ACCT pid=4387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.383610 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 34040 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:16.391218 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:16.426057 kernel: audit: type=1101 audit(1768353016.380:592): pid=4387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.421482 systemd-logind[1635]: New session 9 of user core. 
Jan 14 01:10:16.386000 audit[4387]: CRED_ACQ pid=4387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.510981 kernel: audit: type=1103 audit(1768353016.386:593): pid=4387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.512210 kernel: audit: type=1006 audit(1768353016.386:594): pid=4387 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:10:16.484902 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:10:16.386000 audit[4387]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff73372490 a2=3 a3=0 items=0 ppid=1 pid=4387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:16.570040 kernel: audit: type=1300 audit(1768353016.386:594): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff73372490 a2=3 a3=0 items=0 ppid=1 pid=4387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:16.386000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:16.503000 audit[4387]: USER_START pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 01:10:16.642371 kernel: audit: type=1327 audit(1768353016.386:594): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:16.642965 kernel: audit: type=1105 audit(1768353016.503:595): pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.516000 audit[4418]: CRED_ACQ pid=4418 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.686323 kernel: audit: type=1103 audit(1768353016.516:596): pid=4418 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:16.921216 containerd[1661]: time="2026-01-14T01:10:16.917133080Z" level=error msg="Failed to destroy network for sandbox \"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:16.927434 systemd[1]: run-netns-cni\x2d2e7fc104\x2d3464\x2dc0e4\x2d7f5a\x2d710ea493ab69.mount: Deactivated successfully. 
Jan 14 01:10:16.947060 containerd[1661]: time="2026-01-14T01:10:16.944408373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:16.947432 kubelet[2949]: E0114 01:10:16.945583 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:16.947432 kubelet[2949]: E0114 01:10:16.945921 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:16.947432 kubelet[2949]: E0114 01:10:16.945954 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:16.948191 kubelet[2949]: E0114 01:10:16.946019 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"497dab31265c7d3225dae922e387c8848362bbdb7a57c3c150db0473c3c47b8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:17.015869 containerd[1661]: time="2026-01-14T01:10:17.012199527Z" level=error msg="Failed to destroy network for sandbox \"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:17.022218 systemd[1]: run-netns-cni\x2defb447ba\x2d527b\x2d7476\x2df81f\x2dca246dcb0f29.mount: Deactivated successfully. 
Jan 14 01:10:17.042175 containerd[1661]: time="2026-01-14T01:10:17.042029226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:17.046463 kubelet[2949]: E0114 01:10:17.046176 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:17.046463 kubelet[2949]: E0114 01:10:17.046248 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:10:17.046463 kubelet[2949]: E0114 01:10:17.046376 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" 
Jan 14 01:10:17.047164 kubelet[2949]: E0114 01:10:17.046440 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5597ed3abd07bbe37d7076df5d4fa80a90d762ef6216fd1d4d7d9c385b1402e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:17.172908 sshd[4418]: Connection closed by 10.0.0.1 port 34040 Jan 14 01:10:17.173622 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:17.175000 audit[4387]: USER_END pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:17.194938 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:34040.service: Deactivated successfully. Jan 14 01:10:17.206510 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:10:17.218200 systemd-logind[1635]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:10:17.223264 systemd-logind[1635]: Removed session 9. 
Jan 14 01:10:17.232113 kernel: audit: type=1106 audit(1768353017.175:597): pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:17.177000 audit[4387]: CRED_DISP pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:17.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:34040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:17.268090 kernel: audit: type=1104 audit(1768353017.177:598): pid=4387 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:18.244305 containerd[1661]: time="2026-01-14T01:10:18.243952374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:18.730937 containerd[1661]: time="2026-01-14T01:10:18.730585190Z" level=error msg="Failed to destroy network for sandbox \"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:18.740951 systemd[1]: run-netns-cni\x2df6310540\x2d3783\x2d026b\x2d5974\x2daaa961cb656f.mount: Deactivated successfully. 
Jan 14 01:10:18.787629 containerd[1661]: time="2026-01-14T01:10:18.787566793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:18.797061 kubelet[2949]: E0114 01:10:18.796126 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:18.797061 kubelet[2949]: E0114 01:10:18.796206 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:18.797061 kubelet[2949]: E0114 01:10:18.796234 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:18.799157 kubelet[2949]: E0114 01:10:18.796296 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aaed135b325f0ee09b23e6b3bb02cac9f65882ee3dff9a99ef7d5a2f732e702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:10:19.227310 containerd[1661]: time="2026-01-14T01:10:19.226981050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:19.666130 containerd[1661]: time="2026-01-14T01:10:19.665104653Z" level=error msg="Failed to destroy network for sandbox \"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:19.680191 systemd[1]: run-netns-cni\x2d818c55be\x2dbe22\x2d2c62\x2dadd0\x2d35153c315c16.mount: Deactivated successfully. 
Jan 14 01:10:19.709359 containerd[1661]: time="2026-01-14T01:10:19.709291407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:19.710549 kubelet[2949]: E0114 01:10:19.710502 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:19.711549 kubelet[2949]: E0114 01:10:19.711026 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:19.711549 kubelet[2949]: E0114 01:10:19.711068 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:19.711549 kubelet[2949]: E0114 01:10:19.711132 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06604525ddc544c0db80e3a0c558c57c0f57d9d5ca78b8e40dc75d47fd2b8f04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:10:21.222418 containerd[1661]: time="2026-01-14T01:10:21.222077895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:21.536618 containerd[1661]: time="2026-01-14T01:10:21.534527311Z" level=error msg="Failed to destroy network for sandbox \"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:21.540552 systemd[1]: run-netns-cni\x2d64cba649\x2dd0e9\x2d4efa\x2dc1dc\x2d1db0e158d226.mount: Deactivated successfully. 
Jan 14 01:10:21.567620 containerd[1661]: time="2026-01-14T01:10:21.566427524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:21.569195 kubelet[2949]: E0114 01:10:21.568507 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:21.569195 kubelet[2949]: E0114 01:10:21.568579 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:10:21.569195 kubelet[2949]: E0114 01:10:21.568608 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cd8889796-8dksn" Jan 14 01:10:21.570080 kubelet[2949]: E0114 01:10:21.568907 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30f53643afa5fcef1f5c161b4528de5254a29080e6bd3e038d775a0ebc483780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:10:22.202969 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:34056.service - OpenSSH per-connection server daemon (10.0.0.1:34056). Jan 14 01:10:22.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:22.218109 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:22.218232 kernel: audit: type=1130 audit(1768353022.202:600): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:22.770000 audit[4571]: USER_ACCT pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:22.773139 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:22.781125 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:22.812498 systemd-logind[1635]: New session 10 of user core. Jan 14 01:10:22.826295 kernel: audit: type=1101 audit(1768353022.770:601): pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:22.771000 audit[4571]: CRED_ACQ pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:22.883160 kernel: audit: type=1103 audit(1768353022.771:602): pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:22.883321 kernel: audit: type=1006 audit(1768353022.775:603): pid=4571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:10:22.884628 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 01:10:22.775000 audit[4571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe727ed0b0 a2=3 a3=0 items=0 ppid=1 pid=4571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:22.970965 kernel: audit: type=1300 audit(1768353022.775:603): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe727ed0b0 a2=3 a3=0 items=0 ppid=1 pid=4571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:22.775000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:22.997596 kernel: audit: type=1327 audit(1768353022.775:603): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:22.901000 audit[4571]: USER_START pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.056624 kernel: audit: type=1105 audit(1768353022.901:604): pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.131132 kernel: audit: type=1103 audit(1768353022.908:605): pid=4575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 
01:10:22.908000 audit[4575]: CRED_ACQ pid=4575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.547570 sshd[4575]: Connection closed by 10.0.0.1 port 34056 Jan 14 01:10:23.549239 sshd-session[4571]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:23.557000 audit[4571]: USER_END pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.566188 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:34056.service: Deactivated successfully. Jan 14 01:10:23.573305 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:10:23.589633 systemd-logind[1635]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:10:23.620166 systemd-logind[1635]: Removed session 10. 
Jan 14 01:10:23.637476 kernel: audit: type=1106 audit(1768353023.557:606): pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.637570 kernel: audit: type=1104 audit(1768353023.558:607): pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.558000 audit[4571]: CRED_DISP pid=4571 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:23.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:34056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:24.220075 kubelet[2949]: E0114 01:10:24.219590 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:24.240455 containerd[1661]: time="2026-01-14T01:10:24.240345575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:24.667629 containerd[1661]: time="2026-01-14T01:10:24.667482194Z" level=error msg="Failed to destroy network for sandbox \"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:24.680994 systemd[1]: run-netns-cni\x2de271b6e3\x2d242c\x2d36d8\x2d7405\x2d99bc2952ba4f.mount: Deactivated successfully. 
Jan 14 01:10:24.702936 containerd[1661]: time="2026-01-14T01:10:24.702451548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:24.706027 kubelet[2949]: E0114 01:10:24.705419 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:24.706027 kubelet[2949]: E0114 01:10:24.705604 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:24.708243 kubelet[2949]: E0114 01:10:24.707969 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-mwb9m" Jan 14 01:10:24.708243 kubelet[2949]: E0114 01:10:24.708174 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mwb9m_kube-system(c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce33010b4ff345ba14fd7eddfa84aab3d14e986192e4a660ff4ac267b0d117de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mwb9m" podUID="c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e" Jan 14 01:10:28.226177 kubelet[2949]: E0114 01:10:28.223368 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:28.231156 containerd[1661]: time="2026-01-14T01:10:28.229331337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:28.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:28.579011 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:47236.service - OpenSSH per-connection server daemon (10.0.0.1:47236). 
Jan 14 01:10:28.583249 containerd[1661]: time="2026-01-14T01:10:28.580989119Z" level=error msg="Failed to destroy network for sandbox \"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:28.587136 systemd[1]: run-netns-cni\x2d3648d59c\x2d9b40\x2d07c6\x2d79d9\x2d0f46582a8db5.mount: Deactivated successfully. Jan 14 01:10:28.615340 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:28.615458 kernel: audit: type=1130 audit(1768353028.578:609): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:28.661295 containerd[1661]: time="2026-01-14T01:10:28.657628795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:28.662106 kubelet[2949]: E0114 01:10:28.658386 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:28.662106 kubelet[2949]: E0114 01:10:28.658454 2949 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:10:28.662106 kubelet[2949]: E0114 01:10:28.658485 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pvf55" Jan 14 01:10:28.673033 kubelet[2949]: E0114 01:10:28.658552 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pvf55_kube-system(6b7ab4e1-8df7-452b-9e94-dfd2290c9d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bf0b4bf18de97c2a510a14fd4bce44c929486178083cbb7069e314e47b6d0b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pvf55" podUID="6b7ab4e1-8df7-452b-9e94-dfd2290c9d55" Jan 14 01:10:28.857000 audit[4658]: USER_ACCT pid=4658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 
14 01:10:28.866324 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 47236 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:28.865108 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:28.912398 kernel: audit: type=1101 audit(1768353028.857:610): pid=4658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:28.862000 audit[4658]: CRED_ACQ pid=4658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:28.953177 systemd-logind[1635]: New session 11 of user core. Jan 14 01:10:29.005358 kernel: audit: type=1103 audit(1768353028.862:611): pid=4658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.005517 kernel: audit: type=1006 audit(1768353028.862:612): pid=4658 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:10:28.862000 audit[4658]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea4906890 a2=3 a3=0 items=0 ppid=1 pid=4658 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:29.068411 kernel: audit: type=1300 audit(1768353028.862:612): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea4906890 a2=3 a3=0 items=0 ppid=1 pid=4658 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:29.068545 kernel: audit: type=1327 audit(1768353028.862:612): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:28.862000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:29.071475 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 01:10:29.093000 audit[4658]: USER_START pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.161114 kernel: audit: type=1105 audit(1768353029.093:613): pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.099000 audit[4662]: CRED_ACQ pid=4662 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.210056 kernel: audit: type=1103 audit(1768353029.099:614): pid=4662 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.773128 sshd[4662]: Connection closed by 10.0.0.1 port 47236 Jan 14 01:10:29.773620 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:29.775000 
audit[4658]: USER_END pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.787171 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:47236.service: Deactivated successfully. Jan 14 01:10:29.795400 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:10:29.800489 systemd-logind[1635]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:10:29.805478 systemd-logind[1635]: Removed session 11. Jan 14 01:10:29.847152 kernel: audit: type=1106 audit(1768353029.775:615): pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.776000 audit[4658]: CRED_DISP pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:29.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:29.896192 kernel: audit: type=1104 audit(1768353029.776:616): pid=4658 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.238392 containerd[1661]: time="2026-01-14T01:10:30.237418811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:30.413047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884204699.mount: Deactivated successfully. Jan 14 01:10:30.620243 containerd[1661]: time="2026-01-14T01:10:30.619604193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:10:30.641311 containerd[1661]: time="2026-01-14T01:10:30.641259504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:10:30.648220 containerd[1661]: time="2026-01-14T01:10:30.647278525Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:10:30.656331 containerd[1661]: time="2026-01-14T01:10:30.656066916Z" level=error msg="Failed to destroy network for sandbox \"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:30.658231 containerd[1661]: time="2026-01-14T01:10:30.656486332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 14 01:10:30.664395 containerd[1661]: time="2026-01-14T01:10:30.664357271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 41.567813124s" Jan 14 01:10:30.665095 systemd[1]: run-netns-cni\x2da61bd3f7\x2d16e6\x2d783e\x2dfbc8\x2d1eb7e9aa3e4e.mount: Deactivated successfully. Jan 14 01:10:30.666984 containerd[1661]: time="2026-01-14T01:10:30.665245709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:10:30.677143 containerd[1661]: time="2026-01-14T01:10:30.676608111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d97df889-zplfq,Uid:6aa9117b-386d-4c03-8126-035c7bae8bf4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:30.680565 kubelet[2949]: E0114 01:10:30.680176 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:30.680565 kubelet[2949]: E0114 01:10:30.680368 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:30.680565 kubelet[2949]: E0114 01:10:30.680406 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d97df889-zplfq" Jan 14 01:10:30.682068 kubelet[2949]: E0114 01:10:30.680471 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d97df889-zplfq_calico-system(6aa9117b-386d-4c03-8126-035c7bae8bf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61eefd1578b03dc7dd816b626161d36fe48c829db70badf7a4b81494d52d1399\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d97df889-zplfq" podUID="6aa9117b-386d-4c03-8126-035c7bae8bf4" Jan 14 01:10:30.754916 containerd[1661]: time="2026-01-14T01:10:30.754415479Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:10:30.878424 containerd[1661]: time="2026-01-14T01:10:30.878273775Z" level=info 
msg="Container af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:10:30.922333 containerd[1661]: time="2026-01-14T01:10:30.922280075Z" level=info msg="CreateContainer within sandbox \"23497f370fd9de815a478c9b910626c951ea5fa3a24efe82c19535640028980f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89\"" Jan 14 01:10:30.930573 containerd[1661]: time="2026-01-14T01:10:30.930264948Z" level=info msg="StartContainer for \"af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89\"" Jan 14 01:10:30.943247 containerd[1661]: time="2026-01-14T01:10:30.943207592Z" level=info msg="connecting to shim af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89" address="unix:///run/containerd/s/8468593821133bdf76f85428454cd58917bdcebdd39eea30032871f2f0bbdc6b" protocol=ttrpc version=3 Jan 14 01:10:31.069502 systemd[1]: Started cri-containerd-af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89.scope - libcontainer container af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89. 
Jan 14 01:10:31.221944 containerd[1661]: time="2026-01-14T01:10:31.221347204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:31.225559 containerd[1661]: time="2026-01-14T01:10:31.224050917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:31.226096 containerd[1661]: time="2026-01-14T01:10:31.225535758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:31.226096 containerd[1661]: time="2026-01-14T01:10:31.225569681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:31.250000 audit: BPF prog-id=175 op=LOAD Jan 14 01:10:31.250000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3509 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:31.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323366643632613337313235366135316431363330643131616235 Jan 14 01:10:31.250000 audit: BPF prog-id=176 op=LOAD Jan 14 01:10:31.250000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3509 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:31.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323366643632613337313235366135316431363330643131616235 Jan 14 01:10:31.250000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:10:31.250000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:31.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323366643632613337313235366135316431363330643131616235 Jan 14 01:10:31.250000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:10:31.250000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:31.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323366643632613337313235366135316431363330643131616235 Jan 14 01:10:31.250000 audit: BPF prog-id=177 op=LOAD Jan 14 01:10:31.250000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3509 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:31.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323366643632613337313235366135316431363330643131616235 Jan 14 01:10:31.593429 containerd[1661]: time="2026-01-14T01:10:31.592361851Z" level=info msg="StartContainer for \"af23fd62a371256a51d1630d11ab5a1d4f45c94201424271c556c148a956fa89\" returns successfully" Jan 14 01:10:31.736135 kubelet[2949]: E0114 01:10:31.735475 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:31.994224 containerd[1661]: time="2026-01-14T01:10:31.993525943Z" level=error msg="Failed to destroy network for sandbox \"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.007601 containerd[1661]: time="2026-01-14T01:10:32.006389559Z" level=error msg="Failed to destroy network for sandbox \"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.016606 systemd[1]: run-netns-cni\x2d2e5d5079\x2dd19c\x2d4393\x2df045\x2d1f2c9db27b77.mount: Deactivated successfully. 
Jan 14 01:10:32.028380 containerd[1661]: time="2026-01-14T01:10:32.028111982Z" level=error msg="Failed to destroy network for sandbox \"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.044502 containerd[1661]: time="2026-01-14T01:10:32.043401305Z" level=error msg="Failed to destroy network for sandbox \"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.045312 containerd[1661]: time="2026-01-14T01:10:32.045266637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.052570 kubelet[2949]: E0114 01:10:32.052206 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.052570 kubelet[2949]: E0114 01:10:32.052295 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:32.060125 kubelet[2949]: E0114 01:10:32.059031 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mrnrg" Jan 14 01:10:32.069108 kubelet[2949]: E0114 01:10:32.068098 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be10e4970a9d32d3db9c440e3590373f22a0e980a468b5c7503073d36294c5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:10:32.071429 containerd[1661]: time="2026-01-14T01:10:32.071055916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.075117 kubelet[2949]: E0114 01:10:32.073010 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.075117 kubelet[2949]: E0114 01:10:32.073070 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:32.075117 kubelet[2949]: E0114 01:10:32.073097 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" Jan 14 01:10:32.075310 kubelet[2949]: E0114 01:10:32.073155 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfa73f93e69697baa9caff5690aa1420bfa9b3765a39b2a6b27804cdcb19e4bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:32.102347 containerd[1661]: time="2026-01-14T01:10:32.098588886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.109208 containerd[1661]: time="2026-01-14T01:10:32.108140032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.143513 kubelet[2949]: E0114 01:10:32.143237 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.146122 kubelet[2949]: E0114 01:10:32.143473 2949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:10:32.146506 kubelet[2949]: E0114 01:10:32.146383 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:32.148283 kubelet[2949]: E0114 01:10:32.146333 2949 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:10:32.152971 kubelet[2949]: E0114 01:10:32.151997 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbvx7" Jan 14 01:10:32.152971 kubelet[2949]: E0114 01:10:32.152181 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bc2c00424e009d1c1a6c868ac7ff5d8995563f66a290bc8ff927b06eb2a779a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:32.152971 kubelet[2949]: E0114 01:10:32.148512 2949 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" Jan 14 01:10:32.153370 kubelet[2949]: E0114 01:10:32.152451 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a3fb34ed28adf4d9f9e0e64af382cf0bf03f89edcf2068b1d05a6a1e5cd5101\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:10:32.273153 systemd[1]: run-netns-cni\x2d7417ae81\x2df48f\x2de285\x2d04c4\x2dc5cef28e2c6d.mount: Deactivated successfully. Jan 14 01:10:32.273412 systemd[1]: run-netns-cni\x2df5f5cfe2\x2d3a5d\x2d5e2b\x2db618\x2d5857a0d10043.mount: Deactivated successfully. Jan 14 01:10:32.273516 systemd[1]: run-netns-cni\x2d34868764\x2d4dad\x2d82f3\x2dab4b\x2deb987437fd16.mount: Deactivated successfully. Jan 14 01:10:32.622017 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:10:32.622142 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:10:32.753259 kubelet[2949]: E0114 01:10:32.752313 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:33.515189 kubelet[2949]: I0114 01:10:33.515112 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6kp8" podStartSLOduration=5.621960476 podStartE2EDuration="1m12.515096218s" podCreationTimestamp="2026-01-14 01:09:21 +0000 UTC" firstStartedPulling="2026-01-14 01:09:23.786255978 +0000 UTC m=+56.444704064" lastFinishedPulling="2026-01-14 01:10:30.679391719 +0000 UTC m=+123.337839806" observedRunningTime="2026-01-14 01:10:31.841450081 +0000 UTC m=+124.499898178" watchObservedRunningTime="2026-01-14 01:10:33.515096218 +0000 UTC m=+126.173544294" Jan 14 01:10:33.672082 kubelet[2949]: I0114 01:10:33.671178 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-ca-bundle\") pod 
\"6aa9117b-386d-4c03-8126-035c7bae8bf4\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " Jan 14 01:10:33.672082 kubelet[2949]: I0114 01:10:33.671240 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-backend-key-pair\") pod \"6aa9117b-386d-4c03-8126-035c7bae8bf4\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " Jan 14 01:10:33.672082 kubelet[2949]: I0114 01:10:33.671263 2949 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2vt\" (UniqueName: \"kubernetes.io/projected/6aa9117b-386d-4c03-8126-035c7bae8bf4-kube-api-access-vt2vt\") pod \"6aa9117b-386d-4c03-8126-035c7bae8bf4\" (UID: \"6aa9117b-386d-4c03-8126-035c7bae8bf4\") " Jan 14 01:10:33.672082 kubelet[2949]: I0114 01:10:33.671588 2949 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6aa9117b-386d-4c03-8126-035c7bae8bf4" (UID: "6aa9117b-386d-4c03-8126-035c7bae8bf4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:10:33.673107 kubelet[2949]: I0114 01:10:33.672415 2949 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 01:10:33.727255 kubelet[2949]: I0114 01:10:33.727195 2949 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6aa9117b-386d-4c03-8126-035c7bae8bf4" (UID: "6aa9117b-386d-4c03-8126-035c7bae8bf4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:10:33.731259 systemd[1]: var-lib-kubelet-pods-6aa9117b\x2d386d\x2d4c03\x2d8126\x2d035c7bae8bf4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvt2vt.mount: Deactivated successfully. Jan 14 01:10:33.733073 systemd[1]: var-lib-kubelet-pods-6aa9117b\x2d386d\x2d4c03\x2d8126\x2d035c7bae8bf4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:10:33.738130 kubelet[2949]: I0114 01:10:33.732128 2949 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa9117b-386d-4c03-8126-035c7bae8bf4-kube-api-access-vt2vt" (OuterVolumeSpecName: "kube-api-access-vt2vt") pod "6aa9117b-386d-4c03-8126-035c7bae8bf4" (UID: "6aa9117b-386d-4c03-8126-035c7bae8bf4"). InnerVolumeSpecName "kube-api-access-vt2vt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:10:33.775226 kubelet[2949]: I0114 01:10:33.772572 2949 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa9117b-386d-4c03-8126-035c7bae8bf4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 01:10:33.775226 kubelet[2949]: I0114 01:10:33.772609 2949 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vt2vt\" (UniqueName: \"kubernetes.io/projected/6aa9117b-386d-4c03-8126-035c7bae8bf4-kube-api-access-vt2vt\") on node \"localhost\" DevicePath \"\"" Jan 14 01:10:33.780081 systemd[1]: Removed slice kubepods-besteffort-pod6aa9117b_386d_4c03_8126_035c7bae8bf4.slice - libcontainer container kubepods-besteffort-pod6aa9117b_386d_4c03_8126_035c7bae8bf4.slice. Jan 14 01:10:34.129403 systemd[1]: Created slice kubepods-besteffort-pod8c890f23_aecb_4f6e_852c_98f6f05cf99b.slice - libcontainer container kubepods-besteffort-pod8c890f23_aecb_4f6e_852c_98f6f05cf99b.slice. 
Jan 14 01:10:34.182335 kubelet[2949]: I0114 01:10:34.179229 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c890f23-aecb-4f6e-852c-98f6f05cf99b-whisker-backend-key-pair\") pod \"whisker-79654cf445-zt5b8\" (UID: \"8c890f23-aecb-4f6e-852c-98f6f05cf99b\") " pod="calico-system/whisker-79654cf445-zt5b8" Jan 14 01:10:34.182335 kubelet[2949]: I0114 01:10:34.179419 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c890f23-aecb-4f6e-852c-98f6f05cf99b-whisker-ca-bundle\") pod \"whisker-79654cf445-zt5b8\" (UID: \"8c890f23-aecb-4f6e-852c-98f6f05cf99b\") " pod="calico-system/whisker-79654cf445-zt5b8" Jan 14 01:10:34.182335 kubelet[2949]: I0114 01:10:34.179456 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wskk\" (UniqueName: \"kubernetes.io/projected/8c890f23-aecb-4f6e-852c-98f6f05cf99b-kube-api-access-7wskk\") pod \"whisker-79654cf445-zt5b8\" (UID: \"8c890f23-aecb-4f6e-852c-98f6f05cf99b\") " pod="calico-system/whisker-79654cf445-zt5b8" Jan 14 01:10:34.252532 kubelet[2949]: I0114 01:10:34.251362 2949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa9117b-386d-4c03-8126-035c7bae8bf4" path="/var/lib/kubelet/pods/6aa9117b-386d-4c03-8126-035c7bae8bf4/volumes" Jan 14 01:10:34.467269 containerd[1661]: time="2026-01-14T01:10:34.459264001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79654cf445-zt5b8,Uid:8c890f23-aecb-4f6e-852c-98f6f05cf99b,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:34.814155 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:50154.service - OpenSSH per-connection server daemon (10.0.0.1:50154). 
Jan 14 01:10:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.834540 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 14 01:10:34.834627 kernel: audit: type=1130 audit(1768353034.813:623): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.201000 audit[4986]: USER_ACCT pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.267273 kernel: audit: type=1101 audit(1768353035.201:624): pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.213295 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:35.270342 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 50154 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:35.209000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.336206 kernel: audit: type=1103 audit(1768353035.209:625): pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.301117 systemd-logind[1635]: New session 12 of user core. Jan 14 01:10:35.389224 kernel: audit: type=1006 audit(1768353035.209:626): pid=4986 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:10:35.209000 audit[4986]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd49304f0 a2=3 a3=0 items=0 ppid=1 pid=4986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:35.459169 kernel: audit: type=1300 audit(1768353035.209:626): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd49304f0 a2=3 a3=0 items=0 ppid=1 pid=4986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:35.459543 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:10:35.209000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:35.481274 kernel: audit: type=1327 audit(1768353035.209:626): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:35.483000 audit[4986]: USER_START pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.543248 kernel: audit: type=1105 audit(1768353035.483:627): pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.542000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:35.595988 kernel: audit: type=1103 audit(1768353035.542:628): pid=5084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.175974 sshd[5084]: Connection closed by 10.0.0.1 port 50154 Jan 14 01:10:36.178488 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:36.250434 kernel: audit: type=1106 audit(1768353036.193:629): pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.193000 audit[4986]: USER_END pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.258159 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:50154.service: Deactivated successfully. Jan 14 01:10:36.271003 containerd[1661]: time="2026-01-14T01:10:36.268101042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:36.275149 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:10:36.194000 audit[4986]: CRED_DISP pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.294275 systemd-logind[1635]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:10:36.320422 systemd-logind[1635]: Removed session 12. Jan 14 01:10:36.346547 kernel: audit: type=1104 audit(1768353036.194:630): pid=4986 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:10:36.824430 systemd-networkd[1424]: cali9341a6dd4c5: Link UP Jan 14 01:10:36.840326 systemd-networkd[1424]: cali9341a6dd4c5: Gained carrier Jan 14 01:10:37.061137 containerd[1661]: 2026-01-14 01:10:34.687 [INFO][4964] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:10:37.061137 containerd[1661]: 2026-01-14 01:10:34.876 [INFO][4964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79654cf445--zt5b8-eth0 whisker-79654cf445- calico-system 8c890f23-aecb-4f6e-852c-98f6f05cf99b 1258 0 2026-01-14 01:10:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79654cf445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79654cf445-zt5b8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9341a6dd4c5 [] [] }} ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-" Jan 14 01:10:37.061137 containerd[1661]: 2026-01-14 01:10:34.876 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.061137 containerd[1661]: 2026-01-14 01:10:35.979 [INFO][4992] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" HandleID="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Workload="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:35.988 [INFO][4992] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" HandleID="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Workload="localhost-k8s-whisker--79654cf445--zt5b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79654cf445-zt5b8", "timestamp":"2026-01-14 01:10:35.979782406 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:35.988 [INFO][4992] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:35.988 [INFO][4992] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:35.993 [INFO][4992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.106 [INFO][4992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" host="localhost" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.187 [INFO][4992] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.342 [INFO][4992] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.448 [INFO][4992] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.474 [INFO][4992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:37.063145 containerd[1661]: 2026-01-14 01:10:36.481 [INFO][4992] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" host="localhost" Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.501 [INFO][4992] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869 Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.532 [INFO][4992] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" host="localhost" Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.578 [INFO][4992] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" host="localhost" Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.580 [INFO][4992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" host="localhost" Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.580 [INFO][4992] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:37.064190 containerd[1661]: 2026-01-14 01:10:36.580 [INFO][4992] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" HandleID="k8s-pod-network.49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Workload="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.064386 containerd[1661]: 2026-01-14 01:10:36.610 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79654cf445--zt5b8-eth0", GenerateName:"whisker-79654cf445-", Namespace:"calico-system", SelfLink:"", UID:"8c890f23-aecb-4f6e-852c-98f6f05cf99b", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79654cf445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79654cf445-zt5b8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9341a6dd4c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:37.064386 containerd[1661]: 2026-01-14 01:10:36.610 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.070325 containerd[1661]: 2026-01-14 01:10:36.610 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9341a6dd4c5 ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.070325 containerd[1661]: 2026-01-14 01:10:36.852 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.070422 containerd[1661]: 2026-01-14 01:10:36.854 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" 
WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79654cf445--zt5b8-eth0", GenerateName:"whisker-79654cf445-", Namespace:"calico-system", SelfLink:"", UID:"8c890f23-aecb-4f6e-852c-98f6f05cf99b", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79654cf445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869", Pod:"whisker-79654cf445-zt5b8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9341a6dd4c5", MAC:"d6:83:04:52:fb:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:37.070990 containerd[1661]: 2026-01-14 01:10:37.032 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" Namespace="calico-system" Pod="whisker-79654cf445-zt5b8" WorkloadEndpoint="localhost-k8s-whisker--79654cf445--zt5b8-eth0" Jan 14 01:10:37.494268 containerd[1661]: time="2026-01-14T01:10:37.493262238Z" level=info msg="connecting to shim 
49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869" address="unix:///run/containerd/s/03266f4b747f3f7dc58b08de1f424d186834afc24c785de8922f926c8e041c72" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:37.709307 systemd-networkd[1424]: cali587fb23712a: Link UP Jan 14 01:10:37.717545 systemd-networkd[1424]: cali587fb23712a: Gained carrier Jan 14 01:10:37.842111 containerd[1661]: 2026-01-14 01:10:36.590 [INFO][5119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:10:37.842111 containerd[1661]: 2026-01-14 01:10:36.672 [INFO][5119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0 calico-kube-controllers-cd8889796- calico-system 4210e14f-14d6-426e-8696-17d6edfc7412 1025 0 2026-01-14 01:09:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd8889796 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cd8889796-8dksn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali587fb23712a [] [] }} ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-" Jan 14 01:10:37.842111 containerd[1661]: 2026-01-14 01:10:36.672 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.842111 containerd[1661]: 2026-01-14 01:10:37.058 [INFO][5143] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" HandleID="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Workload="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.060 [INFO][5143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" HandleID="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Workload="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cd8889796-8dksn", "timestamp":"2026-01-14 01:10:37.058146754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.060 [INFO][5143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.060 [INFO][5143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.060 [INFO][5143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.098 [INFO][5143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" host="localhost" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.147 [INFO][5143] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.208 [INFO][5143] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.236 [INFO][5143] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.262 [INFO][5143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:37.843025 containerd[1661]: 2026-01-14 01:10:37.264 [INFO][5143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" host="localhost" Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.286 [INFO][5143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13 Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.318 [INFO][5143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" host="localhost" Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.615 [INFO][5143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" host="localhost" Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.622 [INFO][5143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" host="localhost" Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.622 [INFO][5143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:37.874170 containerd[1661]: 2026-01-14 01:10:37.622 [INFO][5143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" HandleID="k8s-pod-network.25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Workload="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.878629 containerd[1661]: 2026-01-14 01:10:37.683 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0", GenerateName:"calico-kube-controllers-cd8889796-", Namespace:"calico-system", SelfLink:"", UID:"4210e14f-14d6-426e-8696-17d6edfc7412", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8889796", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cd8889796-8dksn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fb23712a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:37.880004 containerd[1661]: 2026-01-14 01:10:37.683 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.880004 containerd[1661]: 2026-01-14 01:10:37.683 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali587fb23712a ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.880004 containerd[1661]: 2026-01-14 01:10:37.743 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.880127 containerd[1661]: 2026-01-14 
01:10:37.749 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0", GenerateName:"calico-kube-controllers-cd8889796-", Namespace:"calico-system", SelfLink:"", UID:"4210e14f-14d6-426e-8696-17d6edfc7412", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd8889796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13", Pod:"calico-kube-controllers-cd8889796-8dksn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fb23712a", MAC:"56:8e:64:1c:e2:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:37.884442 containerd[1661]: 2026-01-14 
01:10:37.819 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" Namespace="calico-system" Pod="calico-kube-controllers-cd8889796-8dksn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cd8889796--8dksn-eth0" Jan 14 01:10:37.983623 systemd[1]: Started cri-containerd-49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869.scope - libcontainer container 49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869. Jan 14 01:10:38.081000 audit: BPF prog-id=178 op=LOAD Jan 14 01:10:38.081000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5f2ec90 a2=98 a3=1fffffffffffffff items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.081000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.081000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:10:38.081000 audit[5239]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff5f2ec60 a3=0 items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.081000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.081000 audit: BPF prog-id=179 op=LOAD Jan 14 
01:10:38.081000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5f2eb70 a2=94 a3=3 items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.081000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.083000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:10:38.083000 audit[5239]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff5f2eb70 a2=94 a3=3 items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.083000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.083000 audit: BPF prog-id=180 op=LOAD Jan 14 01:10:38.083000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5f2ebb0 a2=94 a3=7ffff5f2ed90 items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.083000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.083000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:10:38.083000 audit[5239]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff5f2ebb0 a2=94 a3=7ffff5f2ed90 items=0 ppid=5017 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.083000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:10:38.116408 containerd[1661]: time="2026-01-14T01:10:38.113358432Z" level=info msg="connecting to shim 25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13" address="unix:///run/containerd/s/602891aaf1c8d114b09de77101b6f924eee96fc0c6df757d2155b985f4731c1b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:38.148000 audit: BPF prog-id=181 op=LOAD Jan 14 01:10:38.148000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe68eb4400 a2=98 a3=3 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.148000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:10:38.148000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe68eb43d0 a3=0 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.155000 audit: BPF prog-id=182 op=LOAD Jan 14 01:10:38.155000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe68eb41f0 a2=94 a3=54428f items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.155000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.155000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:10:38.155000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68eb41f0 a2=94 a3=54428f items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.155000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.155000 audit: BPF prog-id=183 op=LOAD Jan 14 01:10:38.155000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe68eb4220 a2=94 a3=2 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.155000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.155000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:10:38.155000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68eb4220 a2=0 a3=2 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.155000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:38.162000 audit: BPF prog-id=184 op=LOAD Jan 14 01:10:38.165000 audit: BPF prog-id=185 op=LOAD Jan 14 01:10:38.165000 audit[5195]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.165000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:10:38.165000 audit[5195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.166000 audit: BPF prog-id=186 op=LOAD Jan 14 01:10:38.166000 audit[5195]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.166000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.167000 audit: BPF prog-id=187 op=LOAD Jan 14 01:10:38.167000 audit[5195]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.168000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:10:38.168000 audit[5195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.168000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:10:38.168000 audit[5195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:10:38.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.169000 audit: BPF prog-id=188 op=LOAD Jan 14 01:10:38.169000 audit[5195]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=5185 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439653935393964623036343464666630323962666166373931653865 Jan 14 01:10:38.176275 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:38.317568 systemd-networkd[1424]: cali9341a6dd4c5: Gained IPv6LL Jan 14 01:10:38.318434 systemd[1]: Started cri-containerd-25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13.scope - libcontainer container 25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13. 
Jan 14 01:10:38.432000 audit: BPF prog-id=189 op=LOAD Jan 14 01:10:38.435000 audit: BPF prog-id=190 op=LOAD Jan 14 01:10:38.435000 audit[5253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.439000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:10:38.439000 audit[5253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.440000 audit: BPF prog-id=191 op=LOAD Jan 14 01:10:38.440000 audit[5253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.440000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.440000 audit: BPF prog-id=192 op=LOAD Jan 14 01:10:38.440000 audit[5253]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.440000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:10:38.440000 audit[5253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.440000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:10:38.440000 audit[5253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:38.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.440000 audit: BPF prog-id=193 op=LOAD Jan 14 01:10:38.440000 audit[5253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5238 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:38.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393532663535373764346231333537353863386635346265343634 Jan 14 01:10:38.451246 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:38.585033 containerd[1661]: time="2026-01-14T01:10:38.582001885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79654cf445-zt5b8,Uid:8c890f23-aecb-4f6e-852c-98f6f05cf99b,Namespace:calico-system,Attempt:0,} returns sandbox id \"49e9599db0644dff029bfaf791e8e9ec0a98f1696bea49fd855fe4e47c572869\"" Jan 14 01:10:38.603986 containerd[1661]: time="2026-01-14T01:10:38.600146641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:10:38.770371 containerd[1661]: time="2026-01-14T01:10:38.770325403Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:38.793308 containerd[1661]: time="2026-01-14T01:10:38.793264931Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-cd8889796-8dksn,Uid:4210e14f-14d6-426e-8696-17d6edfc7412,Namespace:calico-system,Attempt:0,} returns sandbox id \"25952f5577d4b135758c8f54be4648263cbfbeaffe6d3ff026e1a38456167a13\"" Jan 14 01:10:38.797380 containerd[1661]: time="2026-01-14T01:10:38.796601211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:10:38.797380 containerd[1661]: time="2026-01-14T01:10:38.797305084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:38.798345 kubelet[2949]: E0114 01:10:38.798300 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:38.803506 kubelet[2949]: E0114 01:10:38.802134 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:38.805051 kubelet[2949]: E0114 01:10:38.804998 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7d3c6d0447df46d88304143bcf710e70,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:38.814107 containerd[1661]: time="2026-01-14T01:10:38.814074678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:10:38.959043 systemd-networkd[1424]: cali587fb23712a: 
Gained IPv6LL Jan 14 01:10:39.000210 containerd[1661]: time="2026-01-14T01:10:38.999537769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:39.039047 containerd[1661]: time="2026-01-14T01:10:39.036982495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:10:39.039047 containerd[1661]: time="2026-01-14T01:10:39.037070053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:39.045462 kubelet[2949]: E0114 01:10:39.045394 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:39.046239 kubelet[2949]: E0114 01:10:39.046200 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:39.051137 containerd[1661]: time="2026-01-14T01:10:39.049284266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:10:39.051420 kubelet[2949]: E0114 01:10:39.051345 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ksr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:39.059411 kubelet[2949]: E0114 01:10:39.058378 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:10:39.187297 containerd[1661]: time="2026-01-14T01:10:39.187237796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:39.195611 containerd[1661]: time="2026-01-14T01:10:39.195469563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:10:39.195611 containerd[1661]: time="2026-01-14T01:10:39.195569620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:39.197414 kubelet[2949]: E0114 01:10:39.197371 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:39.197565 kubelet[2949]: E0114 01:10:39.197536 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:39.198184 kubelet[2949]: E0114 01:10:39.198120 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:39.203160 kubelet[2949]: E0114 01:10:39.201407 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:10:39.218958 kubelet[2949]: E0114 01:10:39.218160 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:39.218958 kubelet[2949]: E0114 01:10:39.218510 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:39.219442 containerd[1661]: time="2026-01-14T01:10:39.219147348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:39.224342 containerd[1661]: time="2026-01-14T01:10:39.223033769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,}" Jan 14 01:10:39.247000 audit: BPF prog-id=194 op=LOAD Jan 14 01:10:39.247000 audit[5245]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=5 a1=7ffe68eb40e0 a2=94 a3=1 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.248000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:10:39.248000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68eb40e0 a2=94 a3=1 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.248000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.271000 audit: BPF prog-id=195 op=LOAD Jan 14 01:10:39.271000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68eb40d0 a2=94 a3=4 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.271000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.271000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:10:39.271000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe68eb40d0 a2=0 a3=4 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.271000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.272000 audit: BPF prog-id=196 op=LOAD Jan 14 01:10:39.272000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe68eb3f30 a2=94 a3=5 items=0 ppid=5017 
pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.272000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:10:39.272000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe68eb3f30 a2=0 a3=5 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.272000 audit: BPF prog-id=197 op=LOAD Jan 14 01:10:39.272000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68eb4150 a2=94 a3=6 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.272000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:10:39.272000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe68eb4150 a2=0 a3=6 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.273000 audit: BPF prog-id=198 op=LOAD Jan 14 01:10:39.273000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68eb3900 a2=94 a3=88 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.273000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.273000 audit: BPF prog-id=199 op=LOAD Jan 14 01:10:39.273000 audit[5245]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe68eb3780 a2=94 a3=2 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.273000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.273000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:10:39.273000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe68eb37b0 a2=0 a3=7ffe68eb38b0 items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.273000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.274000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:10:39.274000 audit[5245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1725ad10 a2=0 a3=f18cd3aea317bd2d items=0 ppid=5017 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.274000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:10:39.453000 audit: BPF prog-id=200 op=LOAD Jan 14 01:10:39.453000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9326c720 a2=98 a3=1999999999999999 items=0 ppid=5017 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.453000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:39.454000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:10:39.454000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe9326c6f0 a3=0 items=0 ppid=5017 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:39.454000 audit: BPF prog-id=201 op=LOAD Jan 14 01:10:39.454000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9326c600 a2=94 a3=ffff items=0 ppid=5017 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:39.454000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:10:39.454000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe9326c600 a2=94 a3=ffff items=0 ppid=5017 pid=5314 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:39.454000 audit: BPF prog-id=202 op=LOAD Jan 14 01:10:39.454000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9326c640 a2=94 a3=7ffe9326c820 items=0 ppid=5017 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:39.454000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:10:39.454000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe9326c640 a2=94 a3=7ffe9326c820 items=0 ppid=5017 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:39.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:10:40.032256 kubelet[2949]: E0114 01:10:40.031354 2949 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:10:40.042317 kubelet[2949]: E0114 01:10:40.041489 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:10:40.190377 systemd-networkd[1424]: vxlan.calico: Link UP Jan 14 01:10:40.190919 systemd-networkd[1424]: vxlan.calico: Gained carrier Jan 14 01:10:40.207247 systemd-networkd[1424]: cali69e53f634ef: Link UP Jan 14 01:10:40.213495 systemd-networkd[1424]: cali69e53f634ef: Gained carrier Jan 14 01:10:40.334377 containerd[1661]: 2026-01-14 01:10:39.496 [INFO][5285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0 coredns-674b8bbfcf- kube-system 
c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e 1022 0 2026-01-14 01:08:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mwb9m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69e53f634ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-" Jan 14 01:10:40.334377 containerd[1661]: 2026-01-14 01:10:39.496 [INFO][5285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.334377 containerd[1661]: 2026-01-14 01:10:39.799 [INFO][5323] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" HandleID="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Workload="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.801 [INFO][5323] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" HandleID="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Workload="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059fc80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mwb9m", "timestamp":"2026-01-14 01:10:39.799286008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.801 [INFO][5323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.803 [INFO][5323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.803 [INFO][5323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.832 [INFO][5323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" host="localhost" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.900 [INFO][5323] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.940 [INFO][5323] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.955 [INFO][5323] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.971 [INFO][5323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:40.335632 containerd[1661]: 2026-01-14 01:10:39.978 [INFO][5323] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" host="localhost" Jan 14 01:10:40.338465 containerd[1661]: 2026-01-14 01:10:40.024 [INFO][5323] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95 Jan 14 01:10:40.338465 
containerd[1661]: 2026-01-14 01:10:40.085 [INFO][5323] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" host="localhost" Jan 14 01:10:40.338465 containerd[1661]: 2026-01-14 01:10:40.157 [INFO][5323] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" host="localhost" Jan 14 01:10:40.338465 containerd[1661]: 2026-01-14 01:10:40.157 [INFO][5323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" host="localhost" Jan 14 01:10:40.338465 containerd[1661]: 2026-01-14 01:10:40.161 [INFO][5323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:40.338465 containerd[1661]: 2026-01-14 01:10:40.161 [INFO][5323] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" HandleID="k8s-pod-network.071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Workload="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.338585 containerd[1661]: 2026-01-14 01:10:40.192 [INFO][5285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e", ResourceVersion:"1022", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mwb9m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e53f634ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:40.344197 containerd[1661]: 2026-01-14 01:10:40.192 [INFO][5285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.344197 containerd[1661]: 2026-01-14 01:10:40.192 [INFO][5285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69e53f634ef 
ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.344197 containerd[1661]: 2026-01-14 01:10:40.215 [INFO][5285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.344340 containerd[1661]: 2026-01-14 01:10:40.216 [INFO][5285] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95", Pod:"coredns-674b8bbfcf-mwb9m", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e53f634ef", MAC:"a6:14:66:3e:0c:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:40.344340 containerd[1661]: 2026-01-14 01:10:40.296 [INFO][5285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" Namespace="kube-system" Pod="coredns-674b8bbfcf-mwb9m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mwb9m-eth0" Jan 14 01:10:40.384000 audit[5354]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:40.387121 kernel: kauditd_printk_skb: 135 callbacks suppressed Jan 14 01:10:40.387214 kernel: audit: type=1325 audit(1768353040.384:678): table=filter:121 family=2 entries=20 op=nft_register_rule pid=5354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:40.384000 audit[5354]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff926c0980 a2=0 a3=7fff926c096c items=0 ppid=3100 pid=5354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.384000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:40.389005 kernel: audit: type=1300 audit(1768353040.384:678): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff926c0980 a2=0 a3=7fff926c096c items=0 ppid=3100 pid=5354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.389061 kernel: audit: type=1327 audit(1768353040.384:678): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:40.464000 audit[5354]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:40.464000 audit[5354]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff926c0980 a2=0 a3=0 items=0 ppid=3100 pid=5354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:40.469957 kernel: audit: type=1325 audit(1768353040.464:679): table=nat:122 family=2 entries=14 op=nft_register_rule pid=5354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:40.470003 kernel: audit: type=1300 audit(1768353040.464:679): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff926c0980 a2=0 a3=0 items=0 ppid=3100 pid=5354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.470050 kernel: audit: type=1327 
audit(1768353040.464:679): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:40.671000 audit: BPF prog-id=203 op=LOAD Jan 14 01:10:40.671000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24cb8630 a2=98 a3=0 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.743105 kernel: audit: type=1334 audit(1768353040.671:680): prog-id=203 op=LOAD Jan 14 01:10:40.743167 kernel: audit: type=1300 audit(1768353040.671:680): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24cb8630 a2=98 a3=0 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.743217 kernel: audit: type=1327 audit(1768353040.671:680): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.743247 kernel: audit: type=1334 audit(1768353040.671:681): prog-id=203 op=UNLOAD Jan 14 01:10:40.671000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.671000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:10:40.671000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc24cb8600 a3=0 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.671000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.672000 audit: BPF prog-id=204 op=LOAD Jan 14 01:10:40.672000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24cb8440 a2=94 a3=54428f items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.672000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:10:40.672000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc24cb8440 a2=94 a3=54428f items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.672000 audit: BPF prog-id=205 op=LOAD Jan 14 01:10:40.672000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24cb8470 a2=94 a3=2 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:40.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.672000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:10:40.672000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc24cb8470 a2=0 a3=2 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.673000 audit: BPF prog-id=206 op=LOAD Jan 14 01:10:40.673000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc24cb8220 a2=94 a3=4 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.673000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.673000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:10:40.673000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc24cb8220 a2=94 a3=4 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.673000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.673000 audit: BPF prog-id=207 op=LOAD Jan 14 01:10:40.673000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc24cb8320 a2=94 a3=7ffc24cb84a0 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.673000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.673000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:10:40.673000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc24cb8320 a2=0 a3=7ffc24cb84a0 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.673000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.708000 audit: BPF prog-id=208 op=LOAD Jan 14 01:10:40.708000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc24cb7a50 a2=94 a3=2 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.708000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.708000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:10:40.708000 audit[5369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc24cb7a50 a2=0 a3=2 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.708000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.708000 audit: BPF prog-id=209 op=LOAD Jan 14 01:10:40.708000 audit[5369]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc24cb7b50 a2=94 a3=30 items=0 ppid=5017 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.708000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:10:40.991509 containerd[1661]: time="2026-01-14T01:10:40.990598757Z" level=info msg="connecting to shim 071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95" address="unix:///run/containerd/s/492c79083a28a32c4535cf25c0da226f926afdb0bc613fd6cf98a3a4ff1d08fc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:41.002000 audit: BPF prog-id=210 op=LOAD Jan 14 01:10:41.002000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca5051170 a2=98 
a3=0 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.002000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.002000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:10:41.002000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffca5051140 a3=0 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.002000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.003000 audit: BPF prog-id=211 op=LOAD Jan 14 01:10:41.003000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca5050f60 a2=94 a3=54428f items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.003000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.003000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:10:41.003000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca5050f60 a2=94 a3=54428f items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.003000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.003000 audit: BPF prog-id=212 op=LOAD Jan 14 01:10:41.003000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca5050f90 a2=94 a3=2 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.003000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.003000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:10:41.003000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca5050f90 a2=0 a3=2 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.003000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:41.062487 systemd-networkd[1424]: cali785331c64da: Link UP Jan 14 01:10:41.070259 systemd-networkd[1424]: cali785331c64da: Gained carrier Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:39.589 [INFO][5298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pvf55-eth0 coredns-674b8bbfcf- kube-system 6b7ab4e1-8df7-452b-9e94-dfd2290c9d55 1017 0 2026-01-14 01:08:29 
+0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pvf55 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali785331c64da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:39.590 [INFO][5298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:39.909 [INFO][5336] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" HandleID="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Workload="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:39.913 [INFO][5336] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" HandleID="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Workload="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pvf55", "timestamp":"2026-01-14 01:10:39.909566493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:39.915 [INFO][5336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.161 [INFO][5336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.166 [INFO][5336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.302 [INFO][5336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.470 [INFO][5336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.642 [INFO][5336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.718 [INFO][5336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.829 [INFO][5336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.829 [INFO][5336] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.848 [INFO][5336] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:40.983 [INFO][5336] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:41.022 [INFO][5336] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:41.031 [INFO][5336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" host="localhost" Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:41.032 [INFO][5336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:41.165171 containerd[1661]: 2026-01-14 01:10:41.032 [INFO][5336] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" HandleID="k8s-pod-network.ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Workload="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.040 [INFO][5298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pvf55-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b7ab4e1-8df7-452b-9e94-dfd2290c9d55", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 29, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pvf55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali785331c64da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.041 [INFO][5298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.041 [INFO][5298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali785331c64da ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.072 [INFO][5298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.079 [INFO][5298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pvf55-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b7ab4e1-8df7-452b-9e94-dfd2290c9d55", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a", Pod:"coredns-674b8bbfcf-pvf55", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali785331c64da", MAC:"1e:75:72:aa:84:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:41.167150 containerd[1661]: 2026-01-14 01:10:41.147 [INFO][5298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" Namespace="kube-system" Pod="coredns-674b8bbfcf-pvf55" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pvf55-eth0" Jan 14 01:10:41.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.95:22-10.0.0.1:50156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.233481 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Jan 14 01:10:41.515594 systemd[1]: Started cri-containerd-071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95.scope - libcontainer container 071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95. 
Jan 14 01:10:41.597019 containerd[1661]: time="2026-01-14T01:10:41.595218527Z" level=info msg="connecting to shim ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a" address="unix:///run/containerd/s/68b78d82349e0b4ae1ba5c24a3dab872b372d857ad446744b1e5f4d91ce1fdc2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:41.645473 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Jan 14 01:10:41.669000 audit: BPF prog-id=213 op=LOAD Jan 14 01:10:41.680000 audit: BPF prog-id=214 op=LOAD Jan 14 01:10:41.680000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=215 op=LOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 
ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=216 op=LOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.681000 audit: BPF prog-id=217 op=LOAD Jan 14 01:10:41.681000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316437396338646331663032626636636432326530613839356165 Jan 14 01:10:41.714000 audit[5408]: USER_ACCT pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.733351 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:41.735000 audit[5408]: CRED_ACQ pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.735000 audit[5408]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe683a4e80 a2=3 a3=0 items=0 ppid=1 pid=5408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:41.738490 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:41.748586 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:41.774044 systemd-logind[1635]: New session 13 of user core. Jan 14 01:10:41.779139 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:10:41.819000 audit[5408]: USER_START pid=5408 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.828000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.050043 systemd[1]: Started cri-containerd-ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a.scope - libcontainer container ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a. 
Jan 14 01:10:42.174000 audit: BPF prog-id=218 op=LOAD Jan 14 01:10:42.174000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca5050e50 a2=94 a3=1 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.174000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:10:42.174000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca5050e50 a2=94 a3=1 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.174000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.211295 containerd[1661]: time="2026-01-14T01:10:42.211247830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwb9m,Uid:c0d00d41-fb7b-4bcd-a0ae-5d87830fc77e,Namespace:kube-system,Attempt:0,} returns sandbox id \"071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95\"" Jan 14 01:10:42.227000 audit: BPF prog-id=219 op=LOAD Jan 14 01:10:42.230000 audit: BPF prog-id=220 op=LOAD Jan 14 01:10:42.230000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca5050e40 a2=94 a3=4 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:42.230000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.231000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:10:42.231000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffca5050e40 a2=0 a3=4 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.232000 audit: BPF prog-id=221 op=LOAD Jan 14 01:10:42.231970 systemd-networkd[1424]: cali69e53f634ef: Gained IPv6LL Jan 14 01:10:42.232000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca5050ca0 a2=94 a3=5 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.232000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.233000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:10:42.233000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffca5050ca0 a2=0 a3=5 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.233000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.234000 audit: BPF prog-id=222 op=LOAD Jan 14 01:10:42.234000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca5050ec0 a2=94 a3=6 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.234000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.234000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:10:42.234000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffca5050ec0 a2=0 a3=6 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.234000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.235000 audit: BPF prog-id=223 op=LOAD Jan 14 01:10:42.235000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca5050670 a2=94 a3=88 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.235000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.237000 audit: BPF prog-id=224 op=LOAD Jan 14 01:10:42.237000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffca50504f0 a2=94 a3=2 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.237000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:10:42.237000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffca5050520 a2=0 a3=7ffca5050620 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.239000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:10:42.239000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2a495d10 a2=0 a3=4ecc6ae2b3727080 items=0 ppid=5017 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.239000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:10:42.247023 kubelet[2949]: E0114 01:10:42.246995 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:42.247000 audit: BPF prog-id=225 op=LOAD Jan 14 01:10:42.247000 audit[5453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.284993 containerd[1661]: time="2026-01-14T01:10:42.282541297Z" level=info msg="CreateContainer within sandbox \"071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:10:42.292392 systemd-networkd[1424]: cali785331c64da: Gained IPv6LL Jan 14 01:10:42.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.293000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:10:42.293000 audit[5453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.299000 audit: BPF prog-id=226 op=LOAD Jan 14 01:10:42.299000 audit[5453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.311000 audit: BPF prog-id=227 op=LOAD Jan 14 01:10:42.311000 audit[5453]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.311000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:10:42.311000 audit[5453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:10:42.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.311000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:10:42.311000 audit[5453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.333000 audit: BPF prog-id=228 op=LOAD Jan 14 01:10:42.333000 audit[5453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5440 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562646535656464616232303962323836336565396461363937393165 Jan 14 01:10:42.359341 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:42.412000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:10:42.412000 audit[5017]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009622c0 
a2=0 a3=0 items=0 ppid=5007 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.412000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:10:42.507400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856380147.mount: Deactivated successfully. Jan 14 01:10:42.546590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675687397.mount: Deactivated successfully. Jan 14 01:10:42.552192 containerd[1661]: time="2026-01-14T01:10:42.548370922Z" level=info msg="Container 1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:10:42.615116 containerd[1661]: time="2026-01-14T01:10:42.614513761Z" level=info msg="CreateContainer within sandbox \"071d79c8dc1f02bf6cd22e0a895ae3099944a0052f6dd8381bd7c329807ebf95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172\"" Jan 14 01:10:42.623623 containerd[1661]: time="2026-01-14T01:10:42.622303947Z" level=info msg="StartContainer for \"1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172\"" Jan 14 01:10:42.639012 containerd[1661]: time="2026-01-14T01:10:42.638539937Z" level=info msg="connecting to shim 1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172" address="unix:///run/containerd/s/492c79083a28a32c4535cf25c0da226f926afdb0bc613fd6cf98a3a4ff1d08fc" protocol=ttrpc version=3 Jan 14 01:10:42.801520 sshd[5450]: Connection closed by 10.0.0.1 port 50156 Jan 14 01:10:42.802583 sshd-session[5408]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:42.809000 audit[5408]: USER_END pid=5408 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.810000 audit[5408]: CRED_DISP pid=5408 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.822202 containerd[1661]: time="2026-01-14T01:10:42.818453317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pvf55,Uid:6b7ab4e1-8df7-452b-9e94-dfd2290c9d55,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a\"" Jan 14 01:10:42.860106 kubelet[2949]: E0114 01:10:42.857351 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:42.887499 containerd[1661]: time="2026-01-14T01:10:42.883226045Z" level=info msg="CreateContainer within sandbox \"ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:10:42.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.95:22-10.0.0.1:50156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.906195 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:50156.service: Deactivated successfully. Jan 14 01:10:42.919293 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:10:42.939271 systemd-logind[1635]: Session 13 logged out. Waiting for processes to exit. 
Jan 14 01:10:42.972263 systemd[1]: Started cri-containerd-1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172.scope - libcontainer container 1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172. Jan 14 01:10:42.983803 containerd[1661]: time="2026-01-14T01:10:42.983518642Z" level=info msg="Container e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:10:42.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.95:22-10.0.0.1:38596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.985502 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:38596.service - OpenSSH per-connection server daemon (10.0.0.1:38596). Jan 14 01:10:43.016135 systemd-logind[1635]: Removed session 13. Jan 14 01:10:43.088243 containerd[1661]: time="2026-01-14T01:10:43.083315098Z" level=info msg="CreateContainer within sandbox \"ebde5eddab209b2863ee9da69791ef36ed730ae71567ef2bba4a77539d8fff3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1\"" Jan 14 01:10:43.088243 containerd[1661]: time="2026-01-14T01:10:43.085290918Z" level=info msg="StartContainer for \"e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1\"" Jan 14 01:10:43.124974 containerd[1661]: time="2026-01-14T01:10:43.118285212Z" level=info msg="connecting to shim e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1" address="unix:///run/containerd/s/68b78d82349e0b4ae1ba5c24a3dab872b372d857ad446744b1e5f4d91ce1fdc2" protocol=ttrpc version=3 Jan 14 01:10:43.139000 audit: BPF prog-id=229 op=LOAD Jan 14 01:10:43.183000 audit: BPF prog-id=230 op=LOAD Jan 14 01:10:43.183000 audit[5493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.186000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:10:43.186000 audit[5493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.186000 audit: BPF prog-id=231 op=LOAD Jan 14 01:10:43.186000 audit[5493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.191000 audit: BPF prog-id=232 op=LOAD Jan 14 01:10:43.191000 audit[5493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 
items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.191000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:10:43.191000 audit[5493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.192000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:10:43.192000 audit[5493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.192000 audit: BPF prog-id=233 op=LOAD Jan 14 01:10:43.192000 audit[5493]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5387 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393035306466373038663834633139323938623630306632373364 Jan 14 01:10:43.244000 audit[5521]: USER_ACCT pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:43.251000 audit[5521]: CRED_ACQ pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:43.252000 audit[5521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc190e2190 a2=3 a3=0 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.252000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:43.254264 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 38596 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:43.256229 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:43.294542 systemd-logind[1635]: New session 14 of user core. Jan 14 01:10:43.314459 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:10:43.376000 audit[5521]: USER_START pid=5521 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:43.397336 systemd[1]: Started cri-containerd-e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1.scope - libcontainer container e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1. Jan 14 01:10:43.403000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:43.417615 containerd[1661]: time="2026-01-14T01:10:43.417581038Z" level=info msg="StartContainer for \"1f9050df708f84c19298b600f273db7d77633e29990b038d0bd48c02e747a172\" returns successfully" Jan 14 01:10:43.554000 audit: BPF prog-id=234 op=LOAD Jan 14 01:10:43.562000 audit: BPF prog-id=235 op=LOAD Jan 14 01:10:43.562000 audit[5538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.562000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:10:43.562000 audit[5538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5538 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.564000 audit: BPF prog-id=236 op=LOAD Jan 14 01:10:43.564000 audit[5538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.564000 audit: BPF prog-id=237 op=LOAD Jan 14 01:10:43.564000 audit[5538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.565000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:10:43.565000 audit[5538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.565000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:10:43.565000 audit[5538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.565000 audit: BPF prog-id=238 op=LOAD Jan 14 01:10:43.565000 audit[5538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5440 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533363134613861336465313933616264313762626230363666656637 Jan 14 01:10:43.874000 audit[5582]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=5582 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:43.874000 audit[5582]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1f47fce0 a2=0 a3=7ffd1f47fccc items=0 ppid=5017 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.874000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:43.887000 audit[5596]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=5596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:43.887000 audit[5596]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcf1ddcdd0 a2=0 a3=7ffcf1ddcdbc items=0 ppid=5017 pid=5596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.887000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:43.965000 audit[5595]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5595 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:43.965000 audit[5595]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffa2919750 a2=0 a3=7fffa291973c items=0 ppid=5017 pid=5595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:43.965000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:43.971207 containerd[1661]: time="2026-01-14T01:10:43.970547229Z" level=info msg="StartContainer for \"e3614a8a3de193abd17bbb066fef7b5fe047c6868cbf55fc2fb12081f8bceaa1\" returns successfully" Jan 14 01:10:44.126467 kubelet[2949]: E0114 01:10:44.126274 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:44.166956 kubelet[2949]: E0114 01:10:44.164497 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:44.246168 containerd[1661]: time="2026-01-14T01:10:44.245188009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:44.283447 containerd[1661]: time="2026-01-14T01:10:44.282075084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:10:44.283588 kubelet[2949]: I0114 01:10:44.283031 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mwb9m" podStartSLOduration=135.283012554 podStartE2EDuration="2m15.283012554s" podCreationTimestamp="2026-01-14 01:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:10:44.27659881 +0000 UTC m=+136.935046917" watchObservedRunningTime="2026-01-14 01:10:44.283012554 +0000 UTC m=+136.941460640" Jan 14 01:10:44.572997 sshd[5561]: Connection closed by 10.0.0.1 port 38596 Jan 14 
01:10:44.574326 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:44.593000 audit[5521]: USER_END pid=5521 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:44.593000 audit[5521]: CRED_DISP pid=5521 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:44.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.95:22-10.0.0.1:38598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:44.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.95:22-10.0.0.1:38596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:44.608332 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:38598.service - OpenSSH per-connection server daemon (10.0.0.1:38598). Jan 14 01:10:44.609506 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:38596.service: Deactivated successfully. Jan 14 01:10:44.619421 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:10:44.636154 systemd-logind[1635]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:10:44.645269 systemd-logind[1635]: Removed session 14. 
Jan 14 01:10:44.833000 audit[5604]: NETFILTER_CFG table=filter:126 family=2 entries=122 op=nft_register_chain pid=5604 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:44.833000 audit[5604]: SYSCALL arch=c000003e syscall=46 success=yes exit=69792 a0=3 a1=7ffeaf29e130 a2=0 a3=7ffeaf29e11c items=0 ppid=5017 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:44.833000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:45.063000 audit[5650]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:45.063000 audit[5650]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe6d6c660 a2=0 a3=7fffe6d6c64c items=0 ppid=3100 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:45.079000 audit[5650]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:45.079000 audit[5650]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffe6d6c660 a2=0 a3=0 items=0 ppid=3100 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.079000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:45.229150 kubelet[2949]: E0114 01:10:45.227130 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:45.229150 kubelet[2949]: E0114 01:10:45.228412 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:45.247297 containerd[1661]: time="2026-01-14T01:10:45.246066631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:45.247000 audit[5635]: USER_ACCT pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:45.252000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:45.253000 audit[5635]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe879ed0 a2=3 a3=0 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.253000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:45.260010 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 38598 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 
14 01:10:45.260201 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:45.284372 systemd-logind[1635]: New session 15 of user core. Jan 14 01:10:45.287311 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 01:10:45.312000 audit[5635]: USER_START pid=5635 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:45.323000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:45.660399 kernel: kauditd_printk_skb: 228 callbacks suppressed Jan 14 01:10:45.660612 kernel: audit: type=1325 audit(1768353045.627:774): table=filter:129 family=2 entries=70 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:45.627000 audit[5695]: NETFILTER_CFG table=filter:129 family=2 entries=70 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:45.627000 audit[5695]: SYSCALL arch=c000003e syscall=46 success=yes exit=38808 a0=3 a1=7ffe60367830 a2=0 a3=7ffe6036781c items=0 ppid=5017 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.788040 kernel: audit: type=1300 audit(1768353045.627:774): arch=c000003e syscall=46 success=yes exit=38808 a0=3 a1=7ffe60367830 a2=0 a3=7ffe6036781c items=0 ppid=5017 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.627000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:45.852348 kernel: audit: type=1327 audit(1768353045.627:774): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:45.992000 audit[5703]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:46.031209 kernel: audit: type=1325 audit(1768353045.992:775): table=filter:130 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:45.992000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd80e674e0 a2=0 a3=7ffd80e674cc items=0 ppid=3100 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:46.101212 kernel: audit: type=1300 audit(1768353045.992:775): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd80e674e0 a2=0 a3=7ffd80e674cc items=0 ppid=3100 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:45.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:46.135214 kernel: audit: type=1327 audit(1768353045.992:775): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:46.103000 audit[5703]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:46.170196 sshd[5662]: Connection closed by 10.0.0.1 port 38598 Jan 14 01:10:46.171066 kernel: audit: type=1325 audit(1768353046.103:776): table=nat:131 family=2 entries=14 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:46.103000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd80e674e0 a2=0 a3=0 items=0 ppid=3100 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:46.201504 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:38598.service: Deactivated successfully. Jan 14 01:10:46.172499 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:46.205054 systemd-logind[1635]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:10:46.215415 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:10:46.232592 systemd-logind[1635]: Removed session 15. 
Jan 14 01:10:46.269614 kernel: audit: type=1300 audit(1768353046.103:776): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd80e674e0 a2=0 a3=0 items=0 ppid=3100 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:46.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:46.294591 containerd[1661]: time="2026-01-14T01:10:46.291494366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,}" Jan 14 01:10:46.298569 kubelet[2949]: E0114 01:10:46.297607 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:46.314800 kernel: audit: type=1327 audit(1768353046.103:776): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:46.315027 kernel: audit: type=1106 audit(1768353046.181:777): pid=5635 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:46.181000 audit[5635]: USER_END pid=5635 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:46.181000 
audit[5635]: CRED_DISP pid=5635 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:46.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.95:22-10.0.0.1:38598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:46.711358 systemd-networkd[1424]: caliee4503b40ac: Link UP Jan 14 01:10:46.717618 systemd-networkd[1424]: caliee4503b40ac: Gained carrier Jan 14 01:10:46.804250 kubelet[2949]: I0114 01:10:46.799610 2949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pvf55" podStartSLOduration=137.799588956 podStartE2EDuration="2m17.799588956s" podCreationTimestamp="2026-01-14 01:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:10:44.543515201 +0000 UTC m=+137.201963318" watchObservedRunningTime="2026-01-14 01:10:46.799588956 +0000 UTC m=+139.458037043" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.196 [INFO][5605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0 calico-apiserver-57c9c7ff47- calico-apiserver 73c10481-1af3-4a40-9a8f-b16adcb34162 1028 0 2026-01-14 01:09:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57c9c7ff47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57c9c7ff47-ct9w8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee4503b40ac [] [] 
}} ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.196 [INFO][5605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.953 [INFO][5664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" HandleID="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.956 [INFO][5664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" HandleID="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000394c10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c9c7ff47-ct9w8", "timestamp":"2026-01-14 01:10:45.953516712 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.956 [INFO][5664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.956 [INFO][5664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:45.956 [INFO][5664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.067 [INFO][5664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.116 [INFO][5664] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.303 [INFO][5664] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.363 [INFO][5664] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.383 [INFO][5664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.383 [INFO][5664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.453 [INFO][5664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7 Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.549 [INFO][5664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.639 [INFO][5664] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.640 [INFO][5664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" host="localhost" Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.644 [INFO][5664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:46.831346 containerd[1661]: 2026-01-14 01:10:46.646 [INFO][5664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" HandleID="k8s-pod-network.8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 01:10:46.680 [INFO][5605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0", GenerateName:"calico-apiserver-57c9c7ff47-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c10481-1af3-4a40-9a8f-b16adcb34162", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c9c7ff47", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c9c7ff47-ct9w8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee4503b40ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 01:10:46.681 [INFO][5605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 01:10:46.681 [INFO][5605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee4503b40ac ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 01:10:46.720 [INFO][5605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 
01:10:46.733 [INFO][5605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0", GenerateName:"calico-apiserver-57c9c7ff47-", Namespace:"calico-apiserver", SelfLink:"", UID:"73c10481-1af3-4a40-9a8f-b16adcb34162", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c9c7ff47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7", Pod:"calico-apiserver-57c9c7ff47-ct9w8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee4503b40ac", MAC:"62:57:d1:97:b3:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:46.853159 containerd[1661]: 2026-01-14 01:10:46.808 [INFO][5605] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-ct9w8" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--ct9w8-eth0" Jan 14 01:10:47.248416 systemd-networkd[1424]: calia3c53bf0239: Link UP Jan 14 01:10:47.262392 systemd-networkd[1424]: calia3c53bf0239: Gained carrier Jan 14 01:10:47.432000 audit[5749]: NETFILTER_CFG table=filter:132 family=2 entries=62 op=nft_register_chain pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:47.432000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffdde95eed0 a2=0 a3=7ffdde95eebc items=0 ppid=5017 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:47.432000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:47.506630 containerd[1661]: time="2026-01-14T01:10:47.504969629Z" level=info msg="connecting to shim 8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7" address="unix:///run/containerd/s/1541c621ffe0f76b247eef7fab4b75f810ca10253a643a91a216d78ce304f92e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:45.283 [INFO][5610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0 calico-apiserver-57c9c7ff47- calico-apiserver e7d0a51e-3dc4-4308-8f17-61e1305f307f 1029 0 2026-01-14 01:09:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57c9c7ff47 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57c9c7ff47-drdgg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3c53bf0239 [] [] }} ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:45.284 [INFO][5610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.134 [INFO][5671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" HandleID="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.134 [INFO][5671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" HandleID="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000473370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c9c7ff47-drdgg", "timestamp":"2026-01-14 01:10:46.134329635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.134 [INFO][5671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.641 [INFO][5671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.641 [INFO][5671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.750 [INFO][5671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.816 [INFO][5671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.957 [INFO][5671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:46.980 [INFO][5671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.017 [INFO][5671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.029 [INFO][5671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.067 [INFO][5671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0 Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.109 [INFO][5671] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.173 [INFO][5671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.173 [INFO][5671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" host="localhost" Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.173 [INFO][5671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:47.532242 containerd[1661]: 2026-01-14 01:10:47.173 [INFO][5671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" HandleID="k8s-pod-network.a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Workload="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.237 [INFO][5610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0", GenerateName:"calico-apiserver-57c9c7ff47-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d0a51e-3dc4-4308-8f17-61e1305f307f", ResourceVersion:"1029", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c9c7ff47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c9c7ff47-drdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c53bf0239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.237 [INFO][5610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.237 [INFO][5610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3c53bf0239 ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.274 [INFO][5610] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.309 [INFO][5610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0", GenerateName:"calico-apiserver-57c9c7ff47-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7d0a51e-3dc4-4308-8f17-61e1305f307f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c9c7ff47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0", Pod:"calico-apiserver-57c9c7ff47-drdgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3c53bf0239", MAC:"a6:6e:4f:35:cf:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:47.534289 containerd[1661]: 2026-01-14 01:10:47.414 [INFO][5610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" Namespace="calico-apiserver" Pod="calico-apiserver-57c9c7ff47-drdgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c9c7ff47--drdgg-eth0" Jan 14 01:10:47.772288 systemd[1]: Started cri-containerd-8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7.scope - libcontainer container 8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7. Jan 14 01:10:47.853266 containerd[1661]: time="2026-01-14T01:10:47.852264317Z" level=info msg="connecting to shim a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0" address="unix:///run/containerd/s/164ba5140b163f44ad924427232b68416a678c86a0404bf446328287358d3f86" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:47.881561 systemd-networkd[1424]: cali03867e6a77a: Link UP Jan 14 01:10:47.889179 systemd-networkd[1424]: cali03867e6a77a: Gained carrier Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:45.936 [INFO][5668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mrnrg-eth0 goldmane-666569f655- calico-system 0be5353a-35d3-4a4f-8ef3-74707ad90bb4 1024 0 2026-01-14 01:09:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mrnrg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali03867e6a77a [] [] }} 
ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:45.945 [INFO][5668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:46.667 [INFO][5707] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" HandleID="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Workload="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:46.669 [INFO][5707] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" HandleID="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Workload="localhost-k8s-goldmane--666569f655--mrnrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f73e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mrnrg", "timestamp":"2026-01-14 01:10:46.667319701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:46.669 [INFO][5707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.174 [INFO][5707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.175 [INFO][5707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.222 [INFO][5707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.362 [INFO][5707] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.441 [INFO][5707] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.467 [INFO][5707] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.527 [INFO][5707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.528 [INFO][5707] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.557 [INFO][5707] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9 Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.598 [INFO][5707] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.650 [INFO][5707] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.653 [INFO][5707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" host="localhost" Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.659 [INFO][5707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:48.011107 containerd[1661]: 2026-01-14 01:10:47.660 [INFO][5707] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" HandleID="k8s-pod-network.794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Workload="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.820 [INFO][5668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mrnrg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0be5353a-35d3-4a4f-8ef3-74707ad90bb4", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mrnrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03867e6a77a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.820 [INFO][5668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.820 [INFO][5668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03867e6a77a ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.894 [INFO][5668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.899 [INFO][5668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mrnrg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0be5353a-35d3-4a4f-8ef3-74707ad90bb4", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9", Pod:"goldmane-666569f655-mrnrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03867e6a77a", MAC:"92:bd:00:59:e2:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:48.017589 containerd[1661]: 2026-01-14 01:10:47.937 [INFO][5668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" Namespace="calico-system" Pod="goldmane-666569f655-mrnrg" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mrnrg-eth0" Jan 14 01:10:48.137309 systemd-networkd[1424]: cali3d0d03fff12: Link UP Jan 14 01:10:48.143443 systemd-networkd[1424]: cali3d0d03fff12: Gained carrier Jan 14 01:10:48.244400 systemd-networkd[1424]: caliee4503b40ac: Gained IPv6LL Jan 14 01:10:48.254206 containerd[1661]: time="2026-01-14T01:10:48.251510743Z" level=info msg="connecting to shim 794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9" address="unix:///run/containerd/s/1eeb38bd1093e8f8391b24e90625a257408b994b92e85e125ed12d417cd29377" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:48.265000 audit: BPF prog-id=239 op=LOAD Jan 14 01:10:48.272000 audit[5824]: NETFILTER_CFG table=filter:133 family=2 entries=53 op=nft_register_chain pid=5824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:48.272000 audit[5824]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffd94060110 a2=0 a3=7ffd940600fc items=0 ppid=5017 pid=5824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:48.274000 audit: BPF prog-id=240 op=LOAD Jan 14 01:10:48.274000 audit[5781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.274000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.274000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:10:48.274000 audit[5781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.276000 audit: BPF prog-id=241 op=LOAD Jan 14 01:10:48.276000 audit[5781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.276000 audit: BPF prog-id=242 op=LOAD Jan 14 01:10:48.276000 audit[5781]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:10:48.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.278000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:10:48.278000 audit[5781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.278000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:10:48.278000 audit[5781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.278000 audit: BPF prog-id=243 op=LOAD Jan 14 01:10:48.278000 audit[5781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5763 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831323533333365323966616264643035663936373136653636316464 Jan 14 01:10:48.307015 systemd[1]: Started cri-containerd-a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0.scope - libcontainer container a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0. Jan 14 01:10:48.310392 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:46.866 [INFO][5718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tbvx7-eth0 csi-node-driver- calico-system 1036b5d9-9d65-4e70-adc3-802295ee7a1e 881 0 2026-01-14 01:09:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tbvx7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d0d03fff12 [] [] }} ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:46.867 [INFO][5718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.557 [INFO][5741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" HandleID="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Workload="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.564 [INFO][5741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" HandleID="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Workload="localhost-k8s-csi--node--driver--tbvx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tbvx7", "timestamp":"2026-01-14 01:10:47.557403327 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.567 [INFO][5741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.672 [INFO][5741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.692 [INFO][5741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.806 [INFO][5741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.859 [INFO][5741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.908 [INFO][5741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.928 [INFO][5741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.964 [INFO][5741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.967 [INFO][5741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:47.983 [INFO][5741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055 Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:48.029 [INFO][5741] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:48.073 [INFO][5741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:48.073 [INFO][5741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" host="localhost" Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:48.073 [INFO][5741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:10:48.381561 containerd[1661]: 2026-01-14 01:10:48.073 [INFO][5741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" HandleID="k8s-pod-network.2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Workload="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.110 [INFO][5718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbvx7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1036b5d9-9d65-4e70-adc3-802295ee7a1e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tbvx7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0d03fff12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.112 [INFO][5718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.112 [INFO][5718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d0d03fff12 ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.141 [INFO][5718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.163 [INFO][5718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" 
Namespace="calico-system" Pod="csi-node-driver-tbvx7" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbvx7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1036b5d9-9d65-4e70-adc3-802295ee7a1e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055", Pod:"csi-node-driver-tbvx7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0d03fff12", MAC:"0a:75:52:2f:d0:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:10:48.389279 containerd[1661]: 2026-01-14 01:10:48.268 [INFO][5718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" Namespace="calico-system" Pod="csi-node-driver-tbvx7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tbvx7-eth0" Jan 14 01:10:48.691000 audit: BPF prog-id=244 op=LOAD Jan 14 01:10:48.699000 audit: BPF prog-id=245 op=LOAD Jan 14 01:10:48.699000 audit[5823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=246 op=LOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=247 op=LOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.700000 audit: BPF prog-id=248 op=LOAD Jan 14 01:10:48.700000 audit[5823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5803 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138316462666630383835383139666262303632643662613662313637 Jan 14 01:10:48.766295 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:48.783561 systemd[1]: Started cri-containerd-794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9.scope - libcontainer container 794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9. 
Jan 14 01:10:48.837000 audit[5905]: NETFILTER_CFG table=filter:134 family=2 entries=92 op=nft_register_chain pid=5905 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:10:48.837000 audit[5905]: SYSCALL arch=c000003e syscall=46 success=yes exit=47796 a0=3 a1=7fff97d25d40 a2=0 a3=7fff97d25d2c items=0 ppid=5017 pid=5905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:48.837000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:10:48.857156 containerd[1661]: time="2026-01-14T01:10:48.856446451Z" level=info msg="connecting to shim 2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055" address="unix:///run/containerd/s/998e4656be8e8f33373be50c2ccbbf5fced9d5f18c6fc86f43b94431f5321388" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:10:48.880337 systemd-networkd[1424]: calia3c53bf0239: Gained IPv6LL Jan 14 01:10:49.014288 containerd[1661]: time="2026-01-14T01:10:49.014243385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-ct9w8,Uid:73c10481-1af3-4a40-9a8f-b16adcb34162,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8125333e29fabdd05f96716e661ddf9bf453120de09c9f3384b4faca3619a3d7\"" Jan 14 01:10:49.023160 containerd[1661]: time="2026-01-14T01:10:49.023124377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:10:49.108000 audit: BPF prog-id=249 op=LOAD Jan 14 01:10:49.134425 systemd[1]: Started cri-containerd-2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055.scope - libcontainer container 2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055. 
Jan 14 01:10:49.137000 audit: BPF prog-id=250 op=LOAD Jan 14 01:10:49.137000 audit[5876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.137000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:10:49.137000 audit[5876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.147000 audit: BPF prog-id=251 op=LOAD Jan 14 01:10:49.147000 audit[5876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.147000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.148000 audit: BPF prog-id=252 op=LOAD Jan 14 01:10:49.148000 audit[5876]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.151000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:10:49.151000 audit[5876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.152000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:10:49.152000 audit[5876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:49.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.152000 audit: BPF prog-id=253 op=LOAD Jan 14 01:10:49.152000 audit[5876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5850 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346436643666666564376336333639363533613762323332343462 Jan 14 01:10:49.182540 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:49.309000 audit: BPF prog-id=254 op=LOAD Jan 14 01:10:49.315000 audit: BPF prog-id=255 op=LOAD Jan 14 01:10:49.315000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.315000 audit: BPF prog-id=255 op=UNLOAD Jan 14 01:10:49.315000 audit[5927]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.317000 audit: BPF prog-id=256 op=LOAD Jan 14 01:10:49.317000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.317000 audit: BPF prog-id=257 op=LOAD Jan 14 01:10:49.317000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.318000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:10:49.318000 
audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.319000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:10:49.319000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.319000 audit: BPF prog-id=258 op=LOAD Jan 14 01:10:49.319000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5900 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234363465623038306565643862613635623066313038623538656161 Jan 14 01:10:49.363256 containerd[1661]: 
time="2026-01-14T01:10:49.362632025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:49.379388 containerd[1661]: time="2026-01-14T01:10:49.374540098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:10:49.380537 containerd[1661]: time="2026-01-14T01:10:49.380441489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:49.384059 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:10:49.385574 kubelet[2949]: E0114 01:10:49.384133 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:49.385574 kubelet[2949]: E0114 01:10:49.384186 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:49.385574 kubelet[2949]: E0114 01:10:49.384348 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:49.390589 kubelet[2949]: E0114 01:10:49.387467 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:49.458151 kubelet[2949]: E0114 01:10:49.452535 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:49.588108 systemd-networkd[1424]: cali03867e6a77a: Gained IPv6LL Jan 14 01:10:49.606306 containerd[1661]: time="2026-01-14T01:10:49.598566097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c9c7ff47-drdgg,Uid:e7d0a51e-3dc4-4308-8f17-61e1305f307f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a81dbff0885819fbb062d6ba6b1670a839589af97c12ac98d91585eabfbd51f0\"" Jan 14 01:10:49.645297 containerd[1661]: time="2026-01-14T01:10:49.643270933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:10:49.742000 audit[5965]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:10:49.742000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd5638ed0 a2=0 a3=7fffd5638ebc items=0 ppid=3100 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:49.760000 audit[5965]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:49.760000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd5638ed0 a2=0 a3=0 items=0 ppid=3100 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:49.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:49.771001 containerd[1661]: time="2026-01-14T01:10:49.767441461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:49.829423 containerd[1661]: time="2026-01-14T01:10:49.820988524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbvx7,Uid:1036b5d9-9d65-4e70-adc3-802295ee7a1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2464eb080eed8ba65b0f108b58eaaca697e23970e12dc34c20b3910095eea055\"" Jan 14 01:10:49.831492 containerd[1661]: time="2026-01-14T01:10:49.831264560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:10:49.838264 containerd[1661]: time="2026-01-14T01:10:49.835240902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:49.842275 kubelet[2949]: E0114 01:10:49.841619 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:49.855116 kubelet[2949]: E0114 01:10:49.842475 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:49.855116 kubelet[2949]: E0114 01:10:49.845470 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:49.855116 kubelet[2949]: E0114 01:10:49.848021 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:10:49.861410 containerd[1661]: time="2026-01-14T01:10:49.861367522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:10:49.989231 containerd[1661]: time="2026-01-14T01:10:49.983168996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mrnrg,Uid:0be5353a-35d3-4a4f-8ef3-74707ad90bb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"794d6d6ffed7c6369653a7b23244b3104ea3257a7d473b80cf266ab7f158cfa9\"" Jan 14 01:10:49.997100 containerd[1661]: time="2026-01-14T01:10:49.997064439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:50.004610 containerd[1661]: time="2026-01-14T01:10:50.004575846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:10:50.006301 containerd[1661]: time="2026-01-14T01:10:50.006270039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:50.011351 kubelet[2949]: E0114 01:10:50.007130 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:10:50.011351 kubelet[2949]: E0114 01:10:50.007185 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:10:50.011351 kubelet[2949]: E0114 01:10:50.007325 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:ni
l,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:50.019330 containerd[1661]: time="2026-01-14T01:10:50.017478825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:10:50.158392 systemd-networkd[1424]: cali3d0d03fff12: Gained IPv6LL Jan 14 01:10:50.164242 containerd[1661]: time="2026-01-14T01:10:50.163505761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:50.258467 containerd[1661]: time="2026-01-14T01:10:50.176115970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:10:50.258467 containerd[1661]: time="2026-01-14T01:10:50.177320806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:50.265364 kubelet[2949]: E0114 01:10:50.251329 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:10:50.265364 kubelet[2949]: E0114 01:10:50.251379 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:10:50.265364 kubelet[2949]: E0114 01:10:50.251575 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:50.272314 kubelet[2949]: E0114 01:10:50.269137 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:50.273159 containerd[1661]: time="2026-01-14T01:10:50.269456203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:10:50.496198 containerd[1661]: time="2026-01-14T01:10:50.491272467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:50.542335 containerd[1661]: time="2026-01-14T01:10:50.542256148Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:10:50.543127 containerd[1661]: time="2026-01-14T01:10:50.543093050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:50.553095 kubelet[2949]: E0114 01:10:50.552536 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:10:50.565435 kubelet[2949]: E0114 01:10:50.554623 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:10:50.566278 kubelet[2949]: E0114 01:10:50.556413 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 
01:10:50.570474 kubelet[2949]: E0114 01:10:50.565623 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:10:50.572280 kubelet[2949]: E0114 01:10:50.571364 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcp97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:50.585011 kubelet[2949]: E0114 01:10:50.576572 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:10:50.610082 kubelet[2949]: E0114 01:10:50.607417 2949 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:51.050226 kernel: kauditd_printk_skb: 105 callbacks suppressed Jan 14 01:10:51.050363 kernel: audit: type=1325 audit(1768353051.024:817): table=filter:137 family=2 entries=20 op=nft_register_rule pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.024000 audit[5972]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.024000 audit[5972]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfff3b460 a2=0 a3=7ffdfff3b44c items=0 ppid=3100 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.149257 kernel: audit: type=1300 audit(1768353051.024:817): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfff3b460 a2=0 a3=7ffdfff3b44c items=0 ppid=3100 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.151548 kernel: audit: type=1327 audit(1768353051.024:817): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:51.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:51.209135 kernel: audit: type=1325 audit(1768353051.154:818): table=nat:138 family=2 entries=14 op=nft_register_rule pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.154000 audit[5972]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.154000 audit[5972]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdfff3b460 a2=0 a3=0 items=0 ppid=3100 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.374465 kernel: audit: type=1300 audit(1768353051.154:818): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdfff3b460 a2=0 a3=0 items=0 ppid=3100 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.380390 kernel: audit: type=1327 audit(1768353051.154:818): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:51.380456 kernel: audit: type=1130 audit(1768353051.327:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:51.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:51.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:51.328045 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:38600.service - OpenSSH per-connection server daemon (10.0.0.1:38600). Jan 14 01:10:51.548155 kubelet[2949]: E0114 01:10:51.548098 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:10:51.552150 kubelet[2949]: E0114 01:10:51.551324 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:10:51.562629 kubelet[2949]: E0114 01:10:51.562581 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:10:51.866000 audit[5979]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.883425 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 38600 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:51.890425 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:51.911243 kernel: audit: type=1325 audit(1768353051.866:820): table=filter:139 family=2 entries=20 op=nft_register_rule pid=5979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:51.866000 audit[5979]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc437cde50 a2=0 a3=7ffc437cde3c items=0 ppid=3100 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.965409 systemd-logind[1635]: New session 16 of user core. 
Jan 14 01:10:52.028503 kernel: audit: type=1300 audit(1768353051.866:820): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc437cde50 a2=0 a3=7ffc437cde3c items=0 ppid=3100 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:52.029088 kernel: audit: type=1327 audit(1768353051.866:820): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:51.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:52.031122 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:10:51.879000 audit[5975]: USER_ACCT pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:51.885000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:51.885000 audit[5975]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9d1162e0 a2=3 a3=0 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.885000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:51.909000 audit[5979]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5979 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:10:51.909000 audit[5979]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc437cde50 a2=0 a3=0 items=0 ppid=3100 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:51.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:52.074000 audit[5975]: USER_START pid=5975 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.096000 audit[5981]: CRED_ACQ pid=5981 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.983451 sshd[5981]: Connection closed by 10.0.0.1 port 38600 Jan 14 01:10:52.983501 sshd-session[5975]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:52.987000 audit[5975]: USER_END pid=5975 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.987000 audit[5975]: CRED_DISP pid=5975 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.999420 
systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:38600.service: Deactivated successfully. Jan 14 01:10:53.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:53.024365 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:10:53.035588 systemd-logind[1635]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:10:53.046446 systemd-logind[1635]: Removed session 16. Jan 14 01:10:53.222404 kubelet[2949]: E0114 01:10:53.221284 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:55.231320 kubelet[2949]: E0114 01:10:55.231286 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:55.235589 containerd[1661]: time="2026-01-14T01:10:55.234175042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:10:55.432547 containerd[1661]: time="2026-01-14T01:10:55.432150602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:55.447152 containerd[1661]: time="2026-01-14T01:10:55.447083640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:10:55.447567 containerd[1661]: time="2026-01-14T01:10:55.447499826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:55.448573 kubelet[2949]: E0114 01:10:55.448533 2949 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:55.450189 kubelet[2949]: E0114 01:10:55.449078 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:55.450189 kubelet[2949]: E0114 01:10:55.449301 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7d3c6d0447df46d88304143bcf710e70,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:f
alse,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:55.452417 containerd[1661]: time="2026-01-14T01:10:55.451128278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:10:55.568244 containerd[1661]: time="2026-01-14T01:10:55.565579374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:55.578470 containerd[1661]: time="2026-01-14T01:10:55.576135545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:10:55.579572 containerd[1661]: time="2026-01-14T01:10:55.578981798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:55.580964 kubelet[2949]: E0114 01:10:55.580588 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:55.580000 audit[6006]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=6006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:55.580000 audit[6006]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5248 a0=3 a1=7ffc73b533e0 a2=0 a3=7ffc73b533cc items=0 ppid=3100 pid=6006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:55.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:55.584568 kubelet[2949]: E0114 01:10:55.583615 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:55.584631 kubelet[2949]: E0114 01:10:55.584580 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,M
ountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ksr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:55.584631 kubelet[2949]: E0114 01:10:55.586188 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:10:55.591544 containerd[1661]: time="2026-01-14T01:10:55.585631508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:10:55.620000 audit[6006]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=6006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:55.620000 audit[6006]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc73b533e0 a2=0 a3=7ffc73b533cc items=0 ppid=3100 pid=6006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:55.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:55.798339 containerd[1661]: time="2026-01-14T01:10:55.798017078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:55.804811 containerd[1661]: time="2026-01-14T01:10:55.802359913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:10:55.804811 containerd[1661]: time="2026-01-14T01:10:55.802453368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:55.808410 kubelet[2949]: E0114 01:10:55.807841 2949 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:55.808410 kubelet[2949]: E0114 01:10:55.808135 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:55.808410 kubelet[2949]: E0114 01:10:55.808280 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext
:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:55.814374 kubelet[2949]: E0114 01:10:55.810574 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:10:56.311223 kubelet[2949]: E0114 01:10:56.310585 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:56.550000 audit[6008]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=6008 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:56.572592 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:10:56.576239 kernel: audit: type=1325 audit(1768353056.550:832): table=filter:143 family=2 entries=14 op=nft_register_rule pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:56.620483 kernel: audit: type=1300 audit(1768353056.550:832): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd3f8bb80 a2=0 a3=7ffcd3f8bb6c items=0 ppid=3100 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:56.550000 audit[6008]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd3f8bb80 a2=0 a3=7ffcd3f8bb6c items=0 ppid=3100 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:56.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:56.747223 kernel: audit: type=1327 audit(1768353056.550:832): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:56.753000 audit[6008]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:56.753000 audit[6008]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcd3f8bb80 a2=0 a3=7ffcd3f8bb6c items=0 ppid=3100 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:56.875530 kernel: 
audit: type=1325 audit(1768353056.753:833): table=nat:144 family=2 entries=56 op=nft_register_chain pid=6008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:10:56.875967 kernel: audit: type=1300 audit(1768353056.753:833): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcd3f8bb80 a2=0 a3=7ffcd3f8bb6c items=0 ppid=3100 pid=6008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:56.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:56.908000 kernel: audit: type=1327 audit(1768353056.753:833): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:10:58.051186 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:43594.service - OpenSSH per-connection server daemon (10.0.0.1:43594). Jan 14 01:10:58.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:58.102327 kernel: audit: type=1130 audit(1768353058.050:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:58.363000 audit[6012]: USER_ACCT pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:58.367380 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 43594 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:10:58.392575 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:58.474367 kernel: audit: type=1101 audit(1768353058.363:835): pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:58.474455 kernel: audit: type=1103 audit(1768353058.376:836): pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:58.376000 audit[6012]: CRED_ACQ pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:58.458532 systemd-logind[1635]: New session 17 of user core. 
Jan 14 01:10:58.545455 kernel: audit: type=1006 audit(1768353058.376:837): pid=6012 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:10:58.376000 audit[6012]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28932490 a2=3 a3=0 items=0 ppid=1 pid=6012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:58.376000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:58.561228 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 01:10:58.590000 audit[6012]: USER_START pid=6012 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:58.616000 audit[6016]: CRED_ACQ pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.236622 kubelet[2949]: E0114 01:10:59.235193 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:59.365255 sshd[6016]: Connection closed by 10.0.0.1 port 43594 Jan 14 01:10:59.368367 sshd-session[6012]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:59.382000 audit[6012]: USER_END pid=6012 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.383000 audit[6012]: CRED_DISP pid=6012 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.397158 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:43594.service: Deactivated successfully. Jan 14 01:10:59.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:43594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:59.404344 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:10:59.413300 systemd-logind[1635]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:10:59.419138 systemd-logind[1635]: Removed session 17. 
Jan 14 01:11:02.243802 containerd[1661]: time="2026-01-14T01:11:02.237270470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:11:02.375690 containerd[1661]: time="2026-01-14T01:11:02.375548555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:02.417855 containerd[1661]: time="2026-01-14T01:11:02.416357771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:11:02.418418 containerd[1661]: time="2026-01-14T01:11:02.418384323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:02.424092 kubelet[2949]: E0114 01:11:02.424041 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:02.430217 kubelet[2949]: E0114 01:11:02.425075 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:02.430217 kubelet[2949]: E0114 01:11:02.425534 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:02.430217 kubelet[2949]: E0114 01:11:02.428535 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:11:02.437454 containerd[1661]: time="2026-01-14T01:11:02.434418793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:11:02.603624 containerd[1661]: time="2026-01-14T01:11:02.597358569Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:02.620189 containerd[1661]: time="2026-01-14T01:11:02.618389564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:11:02.620189 containerd[1661]: time="2026-01-14T01:11:02.618528984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:02.620361 kubelet[2949]: E0114 01:11:02.619285 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:02.620361 kubelet[2949]: E0114 01:11:02.619343 2949 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:02.620361 kubelet[2949]: E0114 01:11:02.619488 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:02.626472 kubelet[2949]: E0114 01:11:02.626182 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:11:03.734009 kubelet[2949]: E0114 01:11:03.730406 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:04.269798 containerd[1661]: 
time="2026-01-14T01:11:04.269196870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:11:04.438554 containerd[1661]: time="2026-01-14T01:11:04.430163276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:04.448314 containerd[1661]: time="2026-01-14T01:11:04.448264649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:11:04.448546 containerd[1661]: time="2026-01-14T01:11:04.448519364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:04.450295 kubelet[2949]: E0114 01:11:04.449387 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:11:04.450295 kubelet[2949]: E0114 01:11:04.449595 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:11:04.468311 kubelet[2949]: E0114 01:11:04.451629 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcp97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:04.472425 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:52216.service - OpenSSH per-connection server daemon (10.0.0.1:52216). Jan 14 01:11:04.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:04.488288 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:11:04.488359 kernel: audit: type=1130 audit(1768353064.472:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:04.508593 kubelet[2949]: E0114 01:11:04.505042 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:11:04.933551 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:04.932000 audit[6063]: USER_ACCT pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:04.944067 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:04.937000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.012121 systemd-logind[1635]: New session 18 of user core. 
Jan 14 01:11:05.067435 kernel: audit: type=1101 audit(1768353064.932:844): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.068078 kernel: audit: type=1103 audit(1768353064.937:845): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.074584 kernel: audit: type=1006 audit(1768353064.937:846): pid=6063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:11:04.937000 audit[6063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce810c920 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:05.184465 kernel: audit: type=1300 audit(1768353064.937:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce810c920 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:05.184597 kernel: audit: type=1327 audit(1768353064.937:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:04.937000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:05.179406 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:11:05.217000 audit[6063]: USER_START pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.329070 kernel: audit: type=1105 audit(1768353065.217:847): pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.248000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.379211 kernel: audit: type=1103 audit(1768353065.248:848): pid=6069 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:06.042817 sshd[6069]: Connection closed by 10.0.0.1 port 52216 Jan 14 01:11:06.044620 sshd-session[6063]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:06.054000 audit[6063]: USER_END pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:06.079438 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:52216.service: Deactivated successfully. 
Jan 14 01:11:06.088421 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:11:06.114009 systemd-logind[1635]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:11:06.123464 systemd-logind[1635]: Removed session 18. Jan 14 01:11:06.187279 kernel: audit: type=1106 audit(1768353066.054:849): pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:06.055000 audit[6063]: CRED_DISP pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:06.262294 kernel: audit: type=1104 audit(1768353066.055:850): pid=6063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:06.266367 kubelet[2949]: E0114 01:11:06.264356 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:06.271505 containerd[1661]: time="2026-01-14T01:11:06.271110052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:11:06.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:06.414539 containerd[1661]: time="2026-01-14T01:11:06.413378746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:06.440305 containerd[1661]: time="2026-01-14T01:11:06.439848230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:06.441238 containerd[1661]: time="2026-01-14T01:11:06.440464686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:11:06.443267 kubelet[2949]: E0114 01:11:06.442324 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:11:06.443267 kubelet[2949]: E0114 01:11:06.442392 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:11:06.443267 kubelet[2949]: E0114 01:11:06.442528 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:11:06.468453 containerd[1661]: time="2026-01-14T01:11:06.466385562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:11:06.599007 containerd[1661]: time="2026-01-14T01:11:06.598449122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:06.605223 containerd[1661]: time="2026-01-14T01:11:06.605173639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:11:06.606136 containerd[1661]: time="2026-01-14T01:11:06.606001816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:06.607451 kubelet[2949]: E0114 01:11:06.607213 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:11:06.607451 kubelet[2949]: E0114 01:11:06.607271 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:11:06.607590 kubelet[2949]: E0114 01:11:06.607423 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:06.609594 kubelet[2949]: E0114 01:11:06.609242 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:11:07.268105 kubelet[2949]: E0114 01:11:07.264258 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:11:08.268249 kubelet[2949]: E0114 01:11:08.267552 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:11:11.120341 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:11.120555 kernel: audit: type=1130 audit(1768353071.091:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:11.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:11.092160 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:52224.service - OpenSSH per-connection server daemon (10.0.0.1:52224). 
Jan 14 01:11:11.669000 audit[6086]: USER_ACCT pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.680497 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 52224 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:11.693320 sshd-session[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:11.757870 kernel: audit: type=1101 audit(1768353071.669:853): pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.758127 kernel: audit: type=1103 audit(1768353071.679:854): pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.679000 audit[6086]: CRED_ACQ pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.758862 systemd-logind[1635]: New session 19 of user core. 
Jan 14 01:11:11.915889 kernel: audit: type=1006 audit(1768353071.679:855): pid=6086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:11:11.679000 audit[6086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf026e610 a2=3 a3=0 items=0 ppid=1 pid=6086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:11.923601 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:11:11.999051 kernel: audit: type=1300 audit(1768353071.679:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf026e610 a2=3 a3=0 items=0 ppid=1 pid=6086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:11.679000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:12.046032 kernel: audit: type=1327 audit(1768353071.679:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:11.944000 audit[6086]: USER_START pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:12.166187 kernel: audit: type=1105 audit(1768353071.944:856): pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.952000 
audit[6091]: CRED_ACQ pid=6091 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:12.245263 kernel: audit: type=1103 audit(1768353071.952:857): pid=6091 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:12.873630 sshd[6091]: Connection closed by 10.0.0.1 port 52224 Jan 14 01:11:12.894000 audit[6086]: USER_END pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:12.874309 sshd-session[6086]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:12.967574 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:52224.service: Deactivated successfully. Jan 14 01:11:13.000414 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:11:13.010317 systemd-logind[1635]: Session 19 logged out. Waiting for processes to exit. 
Jan 14 01:11:13.033235 kernel: audit: type=1106 audit(1768353072.894:858): pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:12.908000 audit[6086]: CRED_DISP pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:13.038233 systemd-logind[1635]: Removed session 19. Jan 14 01:11:12.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:13.134135 kernel: audit: type=1104 audit(1768353072.908:859): pid=6086 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:13.238171 kubelet[2949]: E0114 01:11:13.237122 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:11:14.244557 kubelet[2949]: E0114 01:11:14.244236 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:11:17.942591 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:35782.service - OpenSSH per-connection server daemon (10.0.0.1:35782). Jan 14 01:11:17.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:17.963247 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:17.963394 kernel: audit: type=1130 audit(1768353077.940:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:18.268000 audit[6106]: USER_ACCT pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.270582 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 35782 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:18.293911 kubelet[2949]: E0114 01:11:18.285413 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:18.294240 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:18.330117 kernel: audit: type=1101 audit(1768353078.268:862): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.342612 kubelet[2949]: E0114 01:11:18.338482 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:11:18.285000 audit[6106]: CRED_ACQ pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 14 01:11:18.375225 kubelet[2949]: E0114 01:11:18.350630 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:18.375281 containerd[1661]: time="2026-01-14T01:11:18.343184581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:11:18.400874 systemd-logind[1635]: New session 20 of user core. Jan 14 01:11:18.469343 kernel: audit: type=1103 audit(1768353078.285:863): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.474489 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 01:11:18.564529 kernel: audit: type=1006 audit(1768353078.285:864): pid=6106 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 01:11:18.565383 kernel: audit: type=1300 audit(1768353078.285:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda32b2390 a2=3 a3=0 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:18.285000 audit[6106]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda32b2390 a2=3 a3=0 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:18.571271 containerd[1661]: time="2026-01-14T01:11:18.534584309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:18.571271 containerd[1661]: time="2026-01-14T01:11:18.552350496Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:11:18.571271 containerd[1661]: time="2026-01-14T01:11:18.552465300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:18.606298 kernel: audit: type=1327 audit(1768353078.285:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:18.285000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:18.606471 kubelet[2949]: E0114 01:11:18.603341 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:11:18.606471 kubelet[2949]: E0114 01:11:18.603388 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:11:18.606471 kubelet[2949]: E0114 01:11:18.603501 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7d3c6d0447df46d88304143bcf710e70,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:18.573000 audit[6106]: USER_START pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.650384 containerd[1661]: time="2026-01-14T01:11:18.647175238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:11:18.606000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.846200 kernel: audit: type=1105 audit(1768353078.573:865): pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.846354 kernel: audit: type=1103 audit(1768353078.606:866): pid=6118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:18.958505 containerd[1661]: time="2026-01-14T01:11:18.958444174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:18.966428 containerd[1661]: time="2026-01-14T01:11:18.965571801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:18.966428 containerd[1661]: time="2026-01-14T01:11:18.966300181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:11:18.968083 kubelet[2949]: E0114 01:11:18.967310 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:11:18.968083 kubelet[2949]: E0114 01:11:18.967377 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:11:18.968083 kubelet[2949]: E0114 01:11:18.967517 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79654cf445-zt5b8_calico-system(8c890f23-aecb-4f6e-852c-98f6f05cf99b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:18.972605 kubelet[2949]: E0114 01:11:18.972400 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:11:19.442380 sshd[6118]: Connection closed by 10.0.0.1 port 35782 Jan 14 01:11:19.443480 sshd-session[6106]: pam_unix(sshd:session): session 
closed for user core Jan 14 01:11:19.450000 audit[6106]: USER_END pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:19.467179 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:35782.service: Deactivated successfully. Jan 14 01:11:19.467364 systemd-logind[1635]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:11:19.485540 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:11:19.503295 systemd-logind[1635]: Removed session 20. Jan 14 01:11:19.563789 kernel: audit: type=1106 audit(1768353079.450:867): pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:19.451000 audit[6106]: CRED_DISP pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:19.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:35782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:19.662146 kernel: audit: type=1104 audit(1768353079.451:868): pid=6106 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:21.240625 containerd[1661]: time="2026-01-14T01:11:21.240572858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:11:21.377341 containerd[1661]: time="2026-01-14T01:11:21.373348038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:21.398888 containerd[1661]: time="2026-01-14T01:11:21.398045676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:11:21.398888 containerd[1661]: time="2026-01-14T01:11:21.398160490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:21.405312 kubelet[2949]: E0114 01:11:21.404276 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:11:21.405312 kubelet[2949]: E0114 01:11:21.404335 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:11:21.405312 kubelet[2949]: E0114 
01:11:21.404477 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ksr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd8889796-8dksn_calico-system(4210e14f-14d6-426e-8696-17d6edfc7412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:21.411413 kubelet[2949]: E0114 01:11:21.406508 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:11:22.261434 kubelet[2949]: E0114 01:11:22.258307 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:11:24.470119 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:50778.service - OpenSSH per-connection server daemon (10.0.0.1:50778). Jan 14 01:11:24.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.95:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:24.515401 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:24.515520 kernel: audit: type=1130 audit(1768353084.469:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.95:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:24.775000 audit[6142]: USER_ACCT pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:24.778421 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 50778 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:24.794476 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:24.839820 kernel: audit: type=1101 audit(1768353084.775:871): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:24.782000 audit[6142]: CRED_ACQ pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:24.858133 systemd-logind[1635]: New session 21 of user core. 
Jan 14 01:11:24.934469 kernel: audit: type=1103 audit(1768353084.782:872): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:24.935491 kernel: audit: type=1006 audit(1768353084.783:873): pid=6142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:11:24.946941 kernel: audit: type=1300 audit(1768353084.783:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccef3fad0 a2=3 a3=0 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:24.783000 audit[6142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccef3fad0 a2=3 a3=0 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:24.783000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:25.060539 kernel: audit: type=1327 audit(1768353084.783:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:25.061505 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:11:25.076000 audit[6142]: USER_START pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.174497 kernel: audit: type=1105 audit(1768353085.076:874): pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.174600 kernel: audit: type=1103 audit(1768353085.087:875): pid=6146 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.087000 audit[6146]: CRED_ACQ pid=6146 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.266890 containerd[1661]: time="2026-01-14T01:11:25.266515539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:11:25.402569 containerd[1661]: time="2026-01-14T01:11:25.398146359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:25.423187 containerd[1661]: time="2026-01-14T01:11:25.422894808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 
01:11:25.423411 containerd[1661]: time="2026-01-14T01:11:25.423385314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:25.439188 kubelet[2949]: E0114 01:11:25.438592 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:25.441610 kubelet[2949]: E0114 01:11:25.440230 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:25.442199 kubelet[2949]: E0114 01:11:25.442157 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-ct9w8_calico-apiserver(73c10481-1af3-4a40-9a8f-b16adcb34162): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:25.444149 kubelet[2949]: E0114 01:11:25.444099 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:11:25.605562 sshd[6146]: Connection closed by 10.0.0.1 port 50778 Jan 14 01:11:25.613157 sshd-session[6142]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:25.625000 audit[6142]: USER_END pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.627000 audit[6142]: CRED_DISP pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.713257 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:50778.service: Deactivated successfully. Jan 14 01:11:25.734171 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 14 01:11:25.782575 kernel: audit: type=1106 audit(1768353085.625:876): pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.783139 kernel: audit: type=1104 audit(1768353085.627:877): pid=6142 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:25.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.95:22-10.0.0.1:50778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:25.800427 systemd-logind[1635]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:11:25.810381 systemd-logind[1635]: Removed session 21. 
Jan 14 01:11:27.241230 containerd[1661]: time="2026-01-14T01:11:27.241184255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:11:27.362143 containerd[1661]: time="2026-01-14T01:11:27.362079507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:27.366216 containerd[1661]: time="2026-01-14T01:11:27.366161075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:11:27.366434 containerd[1661]: time="2026-01-14T01:11:27.366409749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:27.367200 kubelet[2949]: E0114 01:11:27.367153 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:27.369166 kubelet[2949]: E0114 01:11:27.367929 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:11:27.369166 kubelet[2949]: E0114 01:11:27.368346 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57c9c7ff47-drdgg_calico-apiserver(e7d0a51e-3dc4-4308-8f17-61e1305f307f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:27.371577 kubelet[2949]: E0114 01:11:27.370172 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:11:29.241320 containerd[1661]: time="2026-01-14T01:11:29.241245453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:11:29.351167 containerd[1661]: time="2026-01-14T01:11:29.351117811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:29.356163 containerd[1661]: time="2026-01-14T01:11:29.355452861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:11:29.356163 containerd[1661]: time="2026-01-14T01:11:29.355560833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:29.356593 kubelet[2949]: E0114 01:11:29.356523 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:11:29.366169 kubelet[2949]: E0114 01:11:29.363164 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:11:29.366169 kubelet[2949]: E0114 01:11:29.363371 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcp97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mrnrg_calico-system(0be5353a-35d3-4a4f-8ef3-74707ad90bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:29.366169 kubelet[2949]: E0114 01:11:29.365488 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:11:30.676311 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:50792.service - OpenSSH per-connection server 
daemon (10.0.0.1:50792). Jan 14 01:11:30.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.95:22-10.0.0.1:50792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:30.690268 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:30.690380 kernel: audit: type=1130 audit(1768353090.675:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.95:22-10.0.0.1:50792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:30.993000 audit[6164]: USER_ACCT pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.005314 sshd[6164]: Accepted publickey for core from 10.0.0.1 port 50792 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:31.016362 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:31.077519 kernel: audit: type=1101 audit(1768353090.993:880): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.005000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.091387 systemd-logind[1635]: New session 22 of user core. 
Jan 14 01:11:31.190871 kernel: audit: type=1103 audit(1768353091.005:881): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.191154 kernel: audit: type=1006 audit(1768353091.009:882): pid=6164 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:11:31.191196 kernel: audit: type=1300 audit(1768353091.009:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0ab642d0 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:31.009000 audit[6164]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0ab642d0 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:31.251570 kernel: audit: type=1327 audit(1768353091.009:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:31.009000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:31.276241 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:11:31.293000 audit[6164]: USER_START pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.384626 kernel: audit: type=1105 audit(1768353091.293:883): pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.314000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:31.500218 kernel: audit: type=1103 audit(1768353091.314:884): pid=6168 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:32.110378 sshd[6168]: Connection closed by 10.0.0.1 port 50792 Jan 14 01:11:32.108454 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:32.199161 kernel: audit: type=1106 audit(1768353092.130:885): pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:32.130000 audit[6164]: USER_END pid=6164 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:32.145360 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:50792.service: Deactivated successfully. Jan 14 01:11:32.146159 systemd-logind[1635]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:11:32.155603 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:11:32.174157 systemd-logind[1635]: Removed session 22. Jan 14 01:11:32.130000 audit[6164]: CRED_DISP pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:32.256241 kernel: audit: type=1104 audit(1768353092.130:886): pid=6164 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:32.267614 kubelet[2949]: E0114 01:11:32.264531 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:11:32.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.95:22-10.0.0.1:50792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:35.229407 kubelet[2949]: E0114 01:11:35.225354 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:11:36.240490 kubelet[2949]: E0114 01:11:36.239561 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:11:36.248583 containerd[1661]: time="2026-01-14T01:11:36.242918426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:11:36.349596 containerd[1661]: time="2026-01-14T01:11:36.347807051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:36.357247 containerd[1661]: time="2026-01-14T01:11:36.355778953Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:36.357247 containerd[1661]: time="2026-01-14T01:11:36.356217021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:11:36.358316 kubelet[2949]: E0114 01:11:36.358196 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:11:36.358410 kubelet[2949]: E0114 01:11:36.358314 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:11:36.358597 kubelet[2949]: E0114 01:11:36.358464 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:11:36.369784 containerd[1661]: time="2026-01-14T01:11:36.369184635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:11:36.467612 containerd[1661]: time="2026-01-14T01:11:36.467062323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:11:36.470493 containerd[1661]: time="2026-01-14T01:11:36.470298242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:11:36.470493 containerd[1661]: time="2026-01-14T01:11:36.470474741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:11:36.475500 kubelet[2949]: E0114 01:11:36.471588 2949 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:11:36.475500 kubelet[2949]: E0114 01:11:36.475471 2949 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:11:36.476180 kubelet[2949]: E0114 01:11:36.475765 2949 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jch8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbvx7_calico-system(1036b5d9-9d65-4e70-adc3-802295ee7a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:11:36.479616 kubelet[2949]: E0114 01:11:36.479520 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:11:37.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.95:22-10.0.0.1:60248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:37.151282 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:60248.service - OpenSSH per-connection server daemon (10.0.0.1:60248). Jan 14 01:11:37.159574 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:37.159904 kernel: audit: type=1130 audit(1768353097.149:888): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.95:22-10.0.0.1:60248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:37.454000 audit[6210]: USER_ACCT pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.460199 sshd[6210]: Accepted publickey for core from 10.0.0.1 port 60248 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:37.466587 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:37.461000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.489856 systemd-logind[1635]: New session 23 of user core. Jan 14 01:11:37.508528 kernel: audit: type=1101 audit(1768353097.454:889): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.508870 kernel: audit: type=1103 audit(1768353097.461:890): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.508910 kernel: audit: type=1006 audit(1768353097.461:891): pid=6210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:11:37.531917 kernel: audit: type=1300 audit(1768353097.461:891): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2c8e8900 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:37.461000 audit[6210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2c8e8900 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:37.461000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:37.568280 kernel: audit: type=1327 audit(1768353097.461:891): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:37.571350 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:11:37.584000 audit[6210]: USER_START pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.633197 kernel: audit: type=1105 audit(1768353097.584:892): pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.593000 audit[6214]: CRED_ACQ pid=6214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.663889 kernel: audit: type=1103 audit(1768353097.593:893): pid=6214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.909834 sshd[6214]: Connection closed by 10.0.0.1 port 60248 Jan 14 01:11:37.911569 sshd-session[6210]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:37.950000 audit[6210]: USER_END pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.958486 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:60248.service: Deactivated successfully. Jan 14 01:11:37.960364 systemd-logind[1635]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:11:37.964818 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:11:37.971487 systemd-logind[1635]: Removed session 23. 
Jan 14 01:11:37.950000 audit[6210]: CRED_DISP pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:38.004603 kernel: audit: type=1106 audit(1768353097.950:894): pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:38.004867 kernel: audit: type=1104 audit(1768353097.950:895): pid=6210 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:37.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.95:22-10.0.0.1:60248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:42.225425 kubelet[2949]: E0114 01:11:42.219363 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4" Jan 14 01:11:42.945218 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:40382.service - OpenSSH per-connection server daemon (10.0.0.1:40382). 
Jan 14 01:11:42.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.95:22-10.0.0.1:40382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:42.955127 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:42.956145 kernel: audit: type=1130 audit(1768353102.944:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.95:22-10.0.0.1:40382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:43.116349 sshd[6227]: Accepted publickey for core from 10.0.0.1 port 40382 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:43.115000 audit[6227]: USER_ACCT pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.130427 sshd-session[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:43.148790 systemd-logind[1635]: New session 24 of user core. 
Jan 14 01:11:43.156893 kernel: audit: type=1101 audit(1768353103.115:898): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.118000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.195884 kernel: audit: type=1103 audit(1768353103.118:899): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.196401 kernel: audit: type=1006 audit(1768353103.118:900): pid=6227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:11:43.196462 kernel: audit: type=1300 audit(1768353103.118:900): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7e48170 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:43.118000 audit[6227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7e48170 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:43.223991 kubelet[2949]: E0114 01:11:43.223426 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f" Jan 14 01:11:43.229522 kernel: audit: type=1327 audit(1768353103.118:900): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:43.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:43.242469 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 01:11:43.252000 audit[6227]: USER_START pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.257000 audit[6231]: CRED_ACQ pid=6231 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.328378 kernel: audit: type=1105 audit(1768353103.252:901): pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.328531 kernel: audit: type=1103 audit(1768353103.257:902): pid=6231 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.548257 sshd[6231]: Connection closed by 10.0.0.1 port 40382 Jan 14 01:11:43.546980 sshd-session[6227]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:43.552000 audit[6227]: USER_END pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.566778 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:40382.service: Deactivated successfully. Jan 14 01:11:43.572169 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:11:43.577139 systemd-logind[1635]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:11:43.585106 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388). Jan 14 01:11:43.589229 systemd-logind[1635]: Removed session 24. 
Jan 14 01:11:43.602250 kernel: audit: type=1106 audit(1768353103.552:903): pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.552000 audit[6227]: CRED_DISP pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.95:22-10.0.0.1:40382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:43.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.95:22-10.0.0.1:40388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:43.653801 kernel: audit: type=1104 audit(1768353103.552:904): pid=6227 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.750000 audit[6245]: USER_ACCT pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.754629 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:43.756000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.756000 audit[6245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6c4f2f0 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:43.756000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:43.762601 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:43.792885 systemd-logind[1635]: New session 25 of user core. Jan 14 01:11:43.844873 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:11:43.872000 audit[6245]: USER_START pid=6245 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:43.880000 audit[6249]: CRED_ACQ pid=6249 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.251579 kubelet[2949]: E0114 01:11:44.250357 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b" Jan 14 01:11:44.810891 sshd[6249]: Connection closed by 10.0.0.1 port 40388 Jan 14 01:11:44.813419 sshd-session[6245]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:44.831000 audit[6245]: USER_END pid=6245 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.831000 audit[6245]: CRED_DISP pid=6245 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.841524 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:40400.service - OpenSSH per-connection server daemon (10.0.0.1:40400). Jan 14 01:11:44.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.95:22-10.0.0.1:40400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:44.850452 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:40388.service: Deactivated successfully. Jan 14 01:11:44.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.95:22-10.0.0.1:40388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:44.864955 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:11:44.878486 systemd-logind[1635]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:11:44.884408 systemd-logind[1635]: Removed session 25. 
Jan 14 01:11:45.036000 audit[6260]: USER_ACCT pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:45.038260 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 40400 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:45.041000 audit[6260]: CRED_ACQ pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:45.042000 audit[6260]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa7176ce0 a2=3 a3=0 items=0 ppid=1 pid=6260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.042000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:45.047366 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:45.067334 systemd-logind[1635]: New session 26 of user core. Jan 14 01:11:45.080600 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:11:45.088000 audit[6260]: USER_START pid=6260 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:45.094000 audit[6267]: CRED_ACQ pid=6267 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.277000 audit[6281]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:46.277000 audit[6281]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc148d9f80 a2=0 a3=7ffc148d9f6c items=0 ppid=3100 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:46.291000 audit[6281]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:46.291000 audit[6281]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc148d9f80 a2=0 a3=0 items=0 ppid=3100 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.291000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:46.302261 sshd[6267]: Connection closed by 10.0.0.1 port 40400 Jan 14 01:11:46.305568 sshd-session[6260]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:46.316000 audit[6260]: USER_END pid=6260 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.317000 audit[6260]: CRED_DISP pid=6260 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.334484 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:40400.service: Deactivated successfully. Jan 14 01:11:46.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.95:22-10.0.0.1:40400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:46.337000 audit[6284]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=6284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:46.337000 audit[6284]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffed014fd60 a2=0 a3=7ffed014fd4c items=0 ppid=3100 pid=6284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:46.341495 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:11:46.345827 systemd-logind[1635]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:11:46.347000 audit[6284]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:46.347000 audit[6284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffed014fd60 a2=0 a3=0 items=0 ppid=3100 pid=6284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:46.352439 systemd-logind[1635]: Removed session 26. Jan 14 01:11:46.356849 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410). Jan 14 01:11:46.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.95:22-10.0.0.1:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:11:46.494000 audit[6288]: USER_ACCT pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.496893 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:46.499000 audit[6288]: CRED_ACQ pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.499000 audit[6288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7777d5d0 a2=3 a3=0 items=0 ppid=1 pid=6288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.499000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:46.502923 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:46.517007 systemd-logind[1635]: New session 27 of user core. Jan 14 01:11:46.532418 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:11:46.539000 audit[6288]: USER_START pid=6288 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.543000 audit[6292]: CRED_ACQ pid=6292 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.962778 sshd[6292]: Connection closed by 10.0.0.1 port 40410 Jan 14 01:11:46.964167 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:46.968000 audit[6288]: USER_END pid=6288 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.969000 audit[6288]: CRED_DISP pid=6288 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:46.977853 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:40410.service: Deactivated successfully. Jan 14 01:11:46.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.95:22-10.0.0.1:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:46.983985 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:11:46.988180 systemd-logind[1635]: Session 27 logged out. Waiting for processes to exit. 
Jan 14 01:11:46.994783 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:40414.service - OpenSSH per-connection server daemon (10.0.0.1:40414). Jan 14 01:11:46.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.95:22-10.0.0.1:40414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:47.001276 systemd-logind[1635]: Removed session 27. Jan 14 01:11:47.137000 audit[6304]: USER_ACCT pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.138631 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 40414 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY Jan 14 01:11:47.139000 audit[6304]: CRED_ACQ pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.140000 audit[6304]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9776d720 a2=3 a3=0 items=0 ppid=1 pid=6304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:47.140000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:47.142910 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:47.154937 systemd-logind[1635]: New session 28 of user core. Jan 14 01:11:47.167134 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 01:11:47.174000 audit[6304]: USER_START pid=6304 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.179000 audit[6308]: CRED_ACQ pid=6308 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.226112 kubelet[2949]: E0114 01:11:47.225135 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412" Jan 14 01:11:47.232549 kubelet[2949]: E0114 01:11:47.226953 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162" Jan 14 01:11:47.377557 sshd[6308]: Connection closed by 10.0.0.1 port 40414 Jan 14 01:11:47.378202 sshd-session[6304]: pam_unix(sshd:session): session closed 
for user core Jan 14 01:11:47.381000 audit[6304]: USER_END pid=6304 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.382000 audit[6304]: CRED_DISP pid=6304 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:47.389366 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:40414.service: Deactivated successfully. Jan 14 01:11:47.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.95:22-10.0.0.1:40414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:47.393241 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 01:11:47.396362 systemd-logind[1635]: Session 28 logged out. Waiting for processes to exit. Jan 14 01:11:47.400606 systemd-logind[1635]: Removed session 28. 
Jan 14 01:11:52.231298 kubelet[2949]: E0114 01:11:52.230592 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e" Jan 14 01:11:52.399011 systemd[1]: Started sshd@27-10.0.0.95:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). Jan 14 01:11:52.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.95:22-10.0.0.1:60732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:52.405918 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 01:11:52.406098 kernel: audit: type=1130 audit(1768353112.398:946): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.95:22-10.0.0.1:60732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 14 01:11:52.519000 audit[6322]: USER_ACCT pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.548914 kernel: audit: type=1101 audit(1768353112.519:947): pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.542120 systemd-logind[1635]: New session 29 of user core.
Jan 14 01:11:52.528018 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:11:52.550170 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:11:52.525000 audit[6322]: CRED_ACQ pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.591782 kernel: audit: type=1103 audit(1768353112.525:948): pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.591903 kernel: audit: type=1006 audit(1768353112.525:949): pid=6322 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 14 01:11:52.591934 kernel: audit: type=1300 audit(1768353112.525:949): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6c829380 a2=3 a3=0 items=0 ppid=1 pid=6322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:52.525000 audit[6322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6c829380 a2=3 a3=0 items=0 ppid=1 pid=6322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:52.622778 kernel: audit: type=1327 audit(1768353112.525:949): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:11:52.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:11:52.624369 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 01:11:52.635000 audit[6322]: USER_START pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.643000 audit[6326]: CRED_ACQ pid=6326 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.686319 kernel: audit: type=1105 audit(1768353112.635:950): pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.686433 kernel: audit: type=1103 audit(1768353112.643:951): pid=6326 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.900771 sshd[6326]: Connection closed by 10.0.0.1 port 60732
Jan 14 01:11:52.900244 sshd-session[6322]: pam_unix(sshd:session): session closed for user core
Jan 14 01:11:52.905000 audit[6322]: USER_END pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.912612 systemd-logind[1635]: Session 29 logged out. Waiting for processes to exit.
Jan 14 01:11:52.915514 systemd[1]: sshd@27-10.0.0.95:22-10.0.0.1:60732.service: Deactivated successfully.
Jan 14 01:11:52.921492 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 01:11:52.929121 systemd-logind[1635]: Removed session 29.
Jan 14 01:11:52.950905 kernel: audit: type=1106 audit(1768353112.905:952): pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.951010 kernel: audit: type=1104 audit(1768353112.905:953): pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.905000 audit[6322]: CRED_DISP pid=6322 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:52.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.95:22-10.0.0.1:60732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:11:53.859000 audit[6340]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:11:53.859000 audit[6340]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4ff4ee40 a2=0 a3=7fff4ff4ee2c items=0 ppid=3100 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:53.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:11:53.874000 audit[6340]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:11:53.874000 audit[6340]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff4ff4ee40 a2=0 a3=7fff4ff4ee2c items=0 ppid=3100 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:53.874000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:11:54.221471 kubelet[2949]: E0114 01:11:54.220547 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mrnrg" podUID="0be5353a-35d3-4a4f-8ef3-74707ad90bb4"
Jan 14 01:11:57.224471 kubelet[2949]: E0114 01:11:57.223290 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-drdgg" podUID="e7d0a51e-3dc4-4308-8f17-61e1305f307f"
Jan 14 01:11:57.228291 kubelet[2949]: E0114 01:11:57.227180 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79654cf445-zt5b8" podUID="8c890f23-aecb-4f6e-852c-98f6f05cf99b"
Jan 14 01:11:57.939175 systemd[1]: Started sshd@28-10.0.0.95:22-10.0.0.1:60742.service - OpenSSH per-connection server daemon (10.0.0.1:60742).
Jan 14 01:11:57.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.95:22-10.0.0.1:60742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:11:57.955813 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 14 01:11:57.955911 kernel: audit: type=1130 audit(1768353117.940:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.95:22-10.0.0.1:60742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:11:58.109000 audit[6342]: USER_ACCT pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.112963 sshd[6342]: Accepted publickey for core from 10.0.0.1 port 60742 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:11:58.116603 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:11:58.114000 audit[6342]: CRED_ACQ pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.142572 systemd-logind[1635]: New session 30 of user core.
Jan 14 01:11:58.175442 kernel: audit: type=1101 audit(1768353118.109:958): pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.175578 kernel: audit: type=1103 audit(1768353118.114:959): pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.201387 kernel: audit: type=1006 audit(1768353118.114:960): pid=6342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 14 01:11:58.201691 kernel: audit: type=1300 audit(1768353118.114:960): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda6778950 a2=3 a3=0 items=0 ppid=1 pid=6342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:58.114000 audit[6342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda6778950 a2=3 a3=0 items=0 ppid=1 pid=6342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:11:58.202196 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 01:11:58.114000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:11:58.257837 kernel: audit: type=1327 audit(1768353118.114:960): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:11:58.231000 audit[6342]: USER_START pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.305982 kernel: audit: type=1105 audit(1768353118.231:961): pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.236000 audit[6346]: CRED_ACQ pid=6346 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.342140 kernel: audit: type=1103 audit(1768353118.236:962): pid=6346 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.576917 sshd[6346]: Connection closed by 10.0.0.1 port 60742
Jan 14 01:11:58.579211 sshd-session[6342]: pam_unix(sshd:session): session closed for user core
Jan 14 01:11:58.589000 audit[6342]: USER_END pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.598332 systemd[1]: sshd@28-10.0.0.95:22-10.0.0.1:60742.service: Deactivated successfully.
Jan 14 01:11:58.600407 systemd-logind[1635]: Session 30 logged out. Waiting for processes to exit.
Jan 14 01:11:58.606408 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 01:11:58.615850 systemd-logind[1635]: Removed session 30.
Jan 14 01:11:58.660985 kernel: audit: type=1106 audit(1768353118.589:963): pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.661579 kernel: audit: type=1104 audit(1768353118.589:964): pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.589000 audit[6342]: CRED_DISP pid=6342 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:11:58.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.95:22-10.0.0.1:60742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:11:59.221491 kubelet[2949]: E0114 01:11:59.221437 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd8889796-8dksn" podUID="4210e14f-14d6-426e-8696-17d6edfc7412"
Jan 14 01:12:02.224384 kubelet[2949]: E0114 01:12:02.224201 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57c9c7ff47-ct9w8" podUID="73c10481-1af3-4a40-9a8f-b16adcb34162"
Jan 14 01:12:03.236257 kubelet[2949]: E0114 01:12:03.234625 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:12:03.242446 kubelet[2949]: E0114 01:12:03.242166 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:12:03.243853 kubelet[2949]: E0114 01:12:03.243028 2949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbvx7" podUID="1036b5d9-9d65-4e70-adc3-802295ee7a1e"
Jan 14 01:12:03.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.95:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:12:03.606421 systemd[1]: Started sshd@29-10.0.0.95:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432).
Jan 14 01:12:03.615803 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:12:03.615854 kernel: audit: type=1130 audit(1768353123.605:966): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.95:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:12:03.721363 systemd[1769]: Created slice background.slice - User Background Tasks Slice.
Jan 14 01:12:03.737810 systemd[1769]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 14 01:12:03.781024 systemd[1769]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 14 01:12:03.892000 audit[6387]: USER_ACCT pid=6387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:03.944628 kernel: audit: type=1101 audit(1768353123.892:967): pid=6387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:03.931476 sshd-session[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:12:03.945558 sshd[6387]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:tIFBb+nPlq1ggzrnUIKPfYX8UIonGqjmywyQASmq6QY
Jan 14 01:12:03.915000 audit[6387]: CRED_ACQ pid=6387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:03.960788 systemd-logind[1635]: New session 31 of user core.
Jan 14 01:12:03.983800 kernel: audit: type=1103 audit(1768353123.915:968): pid=6387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:03.988318 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 14 01:12:04.049870 kernel: audit: type=1006 audit(1768353123.915:969): pid=6387 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 14 01:12:04.049995 kernel: audit: type=1300 audit(1768353123.915:969): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd91f9180 a2=3 a3=0 items=0 ppid=1 pid=6387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:03.915000 audit[6387]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd91f9180 a2=3 a3=0 items=0 ppid=1 pid=6387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:03.915000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:12:04.061894 kernel: audit: type=1327 audit(1768353123.915:969): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:12:04.001000 audit[6387]: USER_START pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.097438 kernel: audit: type=1105 audit(1768353124.001:970): pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.009000 audit[6394]: CRED_ACQ pid=6394 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.133958 kernel: audit: type=1103 audit(1768353124.009:971): pid=6394 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.405317 sshd[6394]: Connection closed by 10.0.0.1 port 46432
Jan 14 01:12:04.407274 sshd-session[6387]: pam_unix(sshd:session): session closed for user core
Jan 14 01:12:04.414000 audit[6387]: USER_END pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.425976 systemd[1]: sshd@29-10.0.0.95:22-10.0.0.1:46432.service: Deactivated successfully.
Jan 14 01:12:04.426543 systemd-logind[1635]: Session 31 logged out. Waiting for processes to exit.
Jan 14 01:12:04.436062 systemd[1]: session-31.scope: Deactivated successfully.
Jan 14 01:12:04.441379 systemd-logind[1635]: Removed session 31.
Jan 14 01:12:04.458928 kernel: audit: type=1106 audit(1768353124.414:972): pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.459204 kernel: audit: type=1104 audit(1768353124.414:973): pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.414000 audit[6387]: CRED_DISP pid=6387 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:04.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.95:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:12:05.218833 kubelet[2949]: E0114 01:12:05.218475 2949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"