Jan 14 06:03:10.299649 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 03:30:44 -00 2026
Jan 14 06:03:10.299686 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 06:03:10.299705 kernel: BIOS-provided physical RAM map:
Jan 14 06:03:10.299716 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 06:03:10.299727 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 06:03:10.299737 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 06:03:10.299750 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 06:03:10.299801 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 06:03:10.299837 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 06:03:10.299849 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 06:03:10.299866 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 06:03:10.299877 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 06:03:10.299888 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 06:03:10.299899 kernel: NX (Execute Disable) protection: active
Jan 14 06:03:10.299912 kernel: APIC: Static calls initialized
Jan 14 06:03:10.299930 kernel: SMBIOS 2.8 present.
Jan 14 06:03:10.299967 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 06:03:10.299978 kernel: DMI: Memory slots populated: 1/1
Jan 14 06:03:10.299990 kernel: Hypervisor detected: KVM
Jan 14 06:03:10.300001 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 06:03:10.300012 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 06:03:10.300023 kernel: kvm-clock: using sched offset of 7872279601 cycles
Jan 14 06:03:10.300036 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 06:03:10.300047 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 06:03:10.300064 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 06:03:10.300076 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 06:03:10.300088 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 06:03:10.300099 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 06:03:10.300112 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 06:03:10.300123 kernel: Using GB pages for direct mapping
Jan 14 06:03:10.300134 kernel: ACPI: Early table checksum verification disabled
Jan 14 06:03:10.300150 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 06:03:10.300162 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300174 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300186 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300197 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 06:03:10.300209 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300222 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300237 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300249 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:03:10.300266 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 06:03:10.300278 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 06:03:10.300290 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 06:03:10.300305 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 06:03:10.300318 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 06:03:10.300329 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 06:03:10.300342 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 06:03:10.300353 kernel: No NUMA configuration found
Jan 14 06:03:10.300365 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 06:03:10.300377 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 06:03:10.300393 kernel: Zone ranges:
Jan 14 06:03:10.300405 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 06:03:10.300417 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 06:03:10.300428 kernel: Normal empty
Jan 14 06:03:10.300440 kernel: Device empty
Jan 14 06:03:10.300452 kernel: Movable zone start for each node
Jan 14 06:03:10.300464 kernel: Early memory node ranges
Jan 14 06:03:10.300476 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 06:03:10.300492 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 06:03:10.300504 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 06:03:10.300517 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 06:03:10.300529 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 06:03:10.300629 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 06:03:10.300648 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 06:03:10.300661 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 06:03:10.300698 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 06:03:10.300738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 06:03:10.300793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 06:03:10.300807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 06:03:10.300820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 06:03:10.300833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 06:03:10.300845 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 06:03:10.300857 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 06:03:10.300875 kernel: TSC deadline timer available
Jan 14 06:03:10.300887 kernel: CPU topo: Max. logical packages: 1
Jan 14 06:03:10.300899 kernel: CPU topo: Max. logical dies: 1
Jan 14 06:03:10.300912 kernel: CPU topo: Max. dies per package: 1
Jan 14 06:03:10.300926 kernel: CPU topo: Max. threads per core: 1
Jan 14 06:03:10.300937 kernel: CPU topo: Num. cores per package: 4
Jan 14 06:03:10.300949 kernel: CPU topo: Num. threads per package: 4
Jan 14 06:03:10.300967 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 06:03:10.300979 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 06:03:10.300991 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 06:03:10.301003 kernel: kvm-guest: setup PV sched yield
Jan 14 06:03:10.301015 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 06:03:10.301027 kernel: Booting paravirtualized kernel on KVM
Jan 14 06:03:10.301039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 06:03:10.301051 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 06:03:10.301068 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 06:03:10.301080 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 06:03:10.301092 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 06:03:10.301103 kernel: kvm-guest: PV spinlocks enabled
Jan 14 06:03:10.301115 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 06:03:10.301128 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 06:03:10.301144 kernel: random: crng init done
Jan 14 06:03:10.301156 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 06:03:10.301169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 06:03:10.301181 kernel: Fallback order for Node 0: 0
Jan 14 06:03:10.301194 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 06:03:10.301206 kernel: Policy zone: DMA32
Jan 14 06:03:10.301219 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 06:03:10.301232 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 06:03:10.301250 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 06:03:10.301263 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 06:03:10.301275 kernel: Dynamic Preempt: voluntary
Jan 14 06:03:10.301288 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 06:03:10.301302 kernel: rcu: RCU event tracing is enabled.
Jan 14 06:03:10.301315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 06:03:10.301328 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 06:03:10.301378 kernel: Rude variant of Tasks RCU enabled.
Jan 14 06:03:10.301394 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 06:03:10.301407 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 06:03:10.301419 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 06:03:10.301433 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 06:03:10.301446 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 06:03:10.301458 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 06:03:10.301476 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 06:03:10.301489 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 06:03:10.301513 kernel: Console: colour VGA+ 80x25
Jan 14 06:03:10.301528 kernel: printk: legacy console [ttyS0] enabled
Jan 14 06:03:10.301541 kernel: ACPI: Core revision 20240827
Jan 14 06:03:10.301554 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 06:03:10.301743 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 06:03:10.301794 kernel: x2apic enabled
Jan 14 06:03:10.301807 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 06:03:10.301844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 06:03:10.301864 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 06:03:10.301877 kernel: kvm-guest: setup PV IPIs
Jan 14 06:03:10.301891 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 06:03:10.301903 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 06:03:10.301923 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 06:03:10.301935 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 06:03:10.301947 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 06:03:10.301962 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 06:03:10.301974 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 06:03:10.301987 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 06:03:10.302005 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 06:03:10.302018 kernel: Speculative Store Bypass: Vulnerable
Jan 14 06:03:10.302030 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 06:03:10.302044 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 06:03:10.302057 kernel: active return thunk: srso_alias_return_thunk
Jan 14 06:03:10.302070 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 06:03:10.302083 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 06:03:10.302100 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 06:03:10.302113 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 06:03:10.303702 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 06:03:10.303747 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 06:03:10.303805 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 06:03:10.303821 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 06:03:10.303836 kernel: Freeing SMP alternatives memory: 32K
Jan 14 06:03:10.303860 kernel: pid_max: default: 32768 minimum: 301
Jan 14 06:03:10.303874 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 06:03:10.303888 kernel: landlock: Up and running.
Jan 14 06:03:10.303902 kernel: SELinux: Initializing.
Jan 14 06:03:10.303917 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 06:03:10.303931 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 06:03:10.303972 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 06:03:10.303992 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 06:03:10.304007 kernel: signal: max sigframe size: 1776
Jan 14 06:03:10.304022 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 06:03:10.304038 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 06:03:10.304053 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 06:03:10.304068 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 06:03:10.304083 kernel: smp: Bringing up secondary CPUs ...
Jan 14 06:03:10.304102 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 06:03:10.304116 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 06:03:10.304131 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 06:03:10.304146 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 06:03:10.304163 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 14 06:03:10.304178 kernel: devtmpfs: initialized
Jan 14 06:03:10.304193 kernel: x86/mm: Memory block size: 128MB
Jan 14 06:03:10.304208 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 06:03:10.304226 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 06:03:10.304241 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 06:03:10.304255 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 06:03:10.304269 kernel: audit: initializing netlink subsys (disabled)
Jan 14 06:03:10.304285 kernel: audit: type=2000 audit(1768370583.727:1): state=initialized audit_enabled=0 res=1
Jan 14 06:03:10.304298 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 06:03:10.304347 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 06:03:10.304367 kernel: cpuidle: using governor menu
Jan 14 06:03:10.304381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 06:03:10.304393 kernel: dca service started, version 1.12.1
Jan 14 06:03:10.304406 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 06:03:10.304421 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 06:03:10.304434 kernel: PCI: Using configuration type 1 for base access
Jan 14 06:03:10.304448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 06:03:10.304468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 06:03:10.304482 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 06:03:10.304496 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 06:03:10.304511 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 06:03:10.304525 kernel: ACPI: Added _OSI(Module Device)
Jan 14 06:03:10.304540 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 06:03:10.304555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 06:03:10.304673 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 06:03:10.304688 kernel: ACPI: Interpreter enabled
Jan 14 06:03:10.304703 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 06:03:10.304718 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 06:03:10.304731 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 06:03:10.304745 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 06:03:10.304796 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 06:03:10.304818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 06:03:10.305224 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 06:03:10.305548 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 06:03:10.305958 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 06:03:10.305980 kernel: PCI host bridge to bus 0000:00
Jan 14 06:03:10.306279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 06:03:10.306631 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 06:03:10.306958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 06:03:10.307245 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 06:03:10.307518 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 06:03:10.307887 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 06:03:10.308180 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 06:03:10.308522 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 06:03:10.308956 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 06:03:10.309286 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 06:03:10.309676 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 06:03:10.310044 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 06:03:10.310357 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 06:03:10.310744 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 06:03:10.311094 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 06:03:10.311416 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 06:03:10.312099 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 06:03:10.312440 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 06:03:10.312951 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 06:03:10.313223 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 06:03:10.313487 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 06:03:10.313915 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 06:03:10.314182 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 06:03:10.314448 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 06:03:10.314806 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 06:03:10.315074 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 06:03:10.315341 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 06:03:10.315648 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 06:03:10.315976 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 06:03:10.316239 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 06:03:10.316505 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 06:03:10.316882 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 06:03:10.317143 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 06:03:10.317158 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 06:03:10.317176 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 06:03:10.317188 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 06:03:10.317200 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 06:03:10.317212 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 06:03:10.317223 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 06:03:10.317235 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 06:03:10.317247 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 06:03:10.317262 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 06:03:10.317274 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 06:03:10.317285 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 06:03:10.317297 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 06:03:10.317309 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 06:03:10.317320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 06:03:10.317332 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 06:03:10.317346 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 06:03:10.317359 kernel: iommu: Default domain type: Translated
Jan 14 06:03:10.317370 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 06:03:10.317382 kernel: PCI: Using ACPI for IRQ routing
Jan 14 06:03:10.317394 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 06:03:10.317405 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 06:03:10.317417 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 06:03:10.317738 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 06:03:10.318042 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 06:03:10.318298 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 06:03:10.318313 kernel: vgaarb: loaded
Jan 14 06:03:10.318325 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 06:03:10.318337 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 06:03:10.318348 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 06:03:10.318366 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 06:03:10.318378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 06:03:10.318390 kernel: pnp: PnP ACPI init
Jan 14 06:03:10.318744 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 06:03:10.318799 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 06:03:10.318812 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 06:03:10.318830 kernel: NET: Registered PF_INET protocol family
Jan 14 06:03:10.318841 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 06:03:10.318854 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 06:03:10.318866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 06:03:10.318878 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 06:03:10.318890 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 06:03:10.318902 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 06:03:10.318917 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 06:03:10.318929 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 06:03:10.318941 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 06:03:10.318953 kernel: NET: Registered PF_XDP protocol family
Jan 14 06:03:10.319205 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 06:03:10.319464 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 06:03:10.319814 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 06:03:10.320068 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 06:03:10.320307 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 06:03:10.320543 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 06:03:10.320557 kernel: PCI: CLS 0 bytes, default 64
Jan 14 06:03:10.320619 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 06:03:10.320632 kernel: Initialise system trusted keyrings
Jan 14 06:03:10.320644 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 06:03:10.320661 kernel: Key type asymmetric registered
Jan 14 06:03:10.320673 kernel: Asymmetric key parser 'x509' registered
Jan 14 06:03:10.320684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 06:03:10.320696 kernel: io scheduler mq-deadline registered
Jan 14 06:03:10.320708 kernel: io scheduler kyber registered
Jan 14 06:03:10.320719 kernel: io scheduler bfq registered
Jan 14 06:03:10.320731 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 06:03:10.320748 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 06:03:10.320792 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 06:03:10.320804 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 06:03:10.320816 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 06:03:10.320827 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 06:03:10.320839 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 06:03:10.320851 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 06:03:10.320866 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 06:03:10.321142 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 06:03:10.321160 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 06:03:10.321407 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 06:03:10.321794 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T06:03:07 UTC (1768370587)
Jan 14 06:03:10.322053 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 06:03:10.322068 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 06:03:10.322086 kernel: NET: Registered PF_INET6 protocol family
Jan 14 06:03:10.322098 kernel: Segment Routing with IPv6
Jan 14 06:03:10.322110 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 06:03:10.322122 kernel: NET: Registered PF_PACKET protocol family
Jan 14 06:03:10.322134 kernel: Key type dns_resolver registered
Jan 14 06:03:10.322145 kernel: IPI shorthand broadcast: enabled
Jan 14 06:03:10.322157 kernel: sched_clock: Marking stable (3678021349, 582709356)->(4443085289, -182354584)
Jan 14 06:03:10.322172 kernel: registered taskstats version 1
Jan 14 06:03:10.322184 kernel: Loading compiled-in X.509 certificates
Jan 14 06:03:10.322196 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 447f89388dd1db788444733bd6b00fe574646ee9'
Jan 14 06:03:10.322207 kernel: Demotion targets for Node 0: null
Jan 14 06:03:10.322219 kernel: Key type .fscrypt registered
Jan 14 06:03:10.322230 kernel: Key type fscrypt-provisioning registered
Jan 14 06:03:10.322242 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 06:03:10.322257 kernel: ima: Allocated hash algorithm: sha1
Jan 14 06:03:10.322269 kernel: ima: No architecture policies found
Jan 14 06:03:10.322280 kernel: clk: Disabling unused clocks
Jan 14 06:03:10.322292 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 06:03:10.322303 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 06:03:10.322315 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 14 06:03:10.322327 kernel: Run /init as init process
Jan 14 06:03:10.322342 kernel: with arguments:
Jan 14 06:03:10.322354 kernel: /init
Jan 14 06:03:10.322366 kernel: with environment:
Jan 14 06:03:10.322377 kernel: HOME=/
Jan 14 06:03:10.322389 kernel: TERM=linux
Jan 14 06:03:10.322400 kernel: SCSI subsystem initialized
Jan 14 06:03:10.322412 kernel: libata version 3.00 loaded.
Jan 14 06:03:10.322817 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 06:03:10.322837 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 06:03:10.323095 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 06:03:10.323361 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 06:03:10.323678 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 06:03:10.324026 kernel: scsi host0: ahci
Jan 14 06:03:10.324312 kernel: scsi host1: ahci
Jan 14 06:03:10.324682 kernel: scsi host2: ahci
Jan 14 06:03:10.325005 kernel: scsi host3: ahci
Jan 14 06:03:10.325335 kernel: scsi host4: ahci
Jan 14 06:03:10.325705 kernel: scsi host5: ahci
Jan 14 06:03:10.325730 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 06:03:10.325743 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 06:03:10.325793 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 06:03:10.325806 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 06:03:10.325819 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 06:03:10.325831 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 06:03:10.325843 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 06:03:10.325859 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 06:03:10.325872 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 06:03:10.325884 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 06:03:10.325896 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 06:03:10.325908 kernel: ata3.00: applying bridge limits
Jan 14 06:03:10.325920 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 06:03:10.325932 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 06:03:10.325947 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 06:03:10.325959 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 06:03:10.325971 kernel: ata3.00: configured for UDMA/100
Jan 14 06:03:10.326295 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 06:03:10.326630 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 06:03:10.326934 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 06:03:10.326958 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 06:03:10.326971 kernel: GPT:16515071 != 27000831
Jan 14 06:03:10.326983 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 06:03:10.326995 kernel: GPT:16515071 != 27000831
Jan 14 06:03:10.327007 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 06:03:10.327018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 06:03:10.327306 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 06:03:10.327326 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 06:03:10.327658 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 06:03:10.327676 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 06:03:10.327688 kernel: device-mapper: uevent: version 1.0.3
Jan 14 06:03:10.327701 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 06:03:10.327713 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 06:03:10.327725 kernel: raid6: avx2x4 gen() 22122 MB/s
Jan 14 06:03:10.327743 kernel: raid6: avx2x2 gen() 33222 MB/s
Jan 14 06:03:10.327793 kernel: raid6: avx2x1 gen() 23507 MB/s
Jan 14 06:03:10.327806 kernel: raid6: using algorithm avx2x2 gen() 33222 MB/s
Jan 14 06:03:10.327818 kernel: raid6: .... xor() 26726 MB/s, rmw enabled
Jan 14 06:03:10.327830 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 06:03:10.327842 kernel: xor: automatically using best checksumming function   avx
Jan 14 06:03:10.327859 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 06:03:10.327875 kernel: BTRFS: device fsid 2c8f2baf-3f08-4641-b860-b6dd41142f72 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 14 06:03:10.327891 kernel: BTRFS info (device dm-0): first mount of filesystem 2c8f2baf-3f08-4641-b860-b6dd41142f72
Jan 14 06:03:10.327903 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 06:03:10.327916 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 06:03:10.327931 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 06:03:10.327943 kernel: loop: module loaded
Jan 14 06:03:10.327956 kernel: loop0: detected capacity change from 0 to 100536
Jan 14 06:03:10.327967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 06:03:10.327982 systemd[1]: Successfully made /usr/ read-only.
Jan 14 06:03:10.327999 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 06:03:10.328015 systemd[1]: Detected virtualization kvm.
Jan 14 06:03:10.328027 systemd[1]: Detected architecture x86-64.
Jan 14 06:03:10.328040 systemd[1]: Running in initrd.
Jan 14 06:03:10.328052 systemd[1]: No hostname configured, using default hostname.
Jan 14 06:03:10.328065 systemd[1]: Hostname set to .
Jan 14 06:03:10.328078 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 06:03:10.328090 systemd[1]: Queued start job for default target initrd.target.
Jan 14 06:03:10.328106 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 06:03:10.328119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 06:03:10.328132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 06:03:10.328145 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 06:03:10.328158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 06:03:10.328171 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 06:03:10.328188 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 06:03:10.328201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 06:03:10.328214 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 06:03:10.328227 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 06:03:10.328240 systemd[1]: Reached target paths.target - Path Units.
Jan 14 06:03:10.328252 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 06:03:10.328268 systemd[1]: Reached target swap.target - Swaps.
Jan 14 06:03:10.328281 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 06:03:10.328294 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 06:03:10.328307 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 06:03:10.328320 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 06:03:10.328334 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 06:03:10.328346 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 06:03:10.328362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 06:03:10.328375 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 06:03:10.328387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 06:03:10.328400 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 06:03:10.328413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 06:03:10.328426 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 06:03:10.328438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 06:03:10.328454 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 06:03:10.328468 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 06:03:10.328480 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 06:03:10.328493 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 06:03:10.328505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 06:03:10.328522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 06:03:10.328536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 06:03:10.328549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 06:03:10.328610 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 06:03:10.328666 systemd-journald[316]: Collecting audit messages is enabled.
Jan 14 06:03:10.328699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 06:03:10.328713 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 06:03:10.328726 systemd-journald[316]: Journal started
Jan 14 06:03:10.328787 systemd-journald[316]: Runtime Journal (/run/log/journal/b105e1993e5641dbb98b572e6d7d1ec6) is 6M, max 48.2M, 42.1M free.
Jan 14 06:03:10.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.339621 kernel: audit: type=1130 audit(1768370590.330:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.339655 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 06:03:10.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.351998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 06:03:10.354277 kernel: audit: type=1130 audit(1768370590.343:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.357159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 06:03:10.380653 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 06:03:10.384640 kernel: Bridge firewalling registered
Jan 14 06:03:10.384987 systemd-modules-load[319]: Inserted module 'br_netfilter'
Jan 14 06:03:10.387345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 06:03:10.543924 kernel: audit: type=1130 audit(1768370590.533:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.396404 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 06:03:10.559289 kernel: audit: type=1130 audit(1768370590.544:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.544094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 06:03:10.559393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 06:03:10.580333 kernel: audit: type=1130 audit(1768370590.565:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.580437 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 06:03:10.595047 kernel: audit: type=1130 audit(1768370590.581:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.592953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 06:03:10.603218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 06:03:10.643670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 06:03:10.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.655528 kernel: audit: type=1130 audit(1768370590.646:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.655641 kernel: audit: type=1334 audit(1768370590.646:9): prog-id=6 op=LOAD
Jan 14 06:03:10.646000 audit: BPF prog-id=6 op=LOAD
Jan 14 06:03:10.658142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 06:03:10.670046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 06:03:10.687287 kernel: audit: type=1130 audit(1768370590.670:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.696690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 06:03:10.722474 dracut-cmdline[359]: dracut-109
Jan 14 06:03:10.732015 dracut-cmdline[359]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 06:03:10.783850 systemd-resolved[351]: Positive Trust Anchors:
Jan 14 06:03:10.783897 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 06:03:10.783905 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 06:03:10.783956 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 06:03:10.843683 systemd-resolved[351]: Defaulting to hostname 'linux'.
Jan 14 06:03:10.845823 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 06:03:10.858376 kernel: audit: type=1130 audit(1768370590.848:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:10.848696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 06:03:10.957700 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 06:03:10.981737 kernel: iscsi: registered transport (tcp)
Jan 14 06:03:11.011372 kernel: iscsi: registered transport (qla4xxx)
Jan 14 06:03:11.011458 kernel: QLogic iSCSI HBA Driver
Jan 14 06:03:11.052519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 06:03:11.098479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 06:03:11.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.103948 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 06:03:11.188465 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 06:03:11.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.192135 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 06:03:11.197657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 06:03:11.265737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 06:03:11.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.273000 audit: BPF prog-id=7 op=LOAD
Jan 14 06:03:11.273000 audit: BPF prog-id=8 op=LOAD
Jan 14 06:03:11.275422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 06:03:11.325345 systemd-udevd[591]: Using default interface naming scheme 'v257'.
Jan 14 06:03:11.352404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 06:03:11.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.364366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 06:03:11.416674 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation
Jan 14 06:03:11.430009 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 06:03:11.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.440000 audit: BPF prog-id=9 op=LOAD
Jan 14 06:03:11.442337 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 06:03:11.476083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 06:03:11.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.483804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 06:03:11.525503 systemd-networkd[709]: lo: Link UP
Jan 14 06:03:11.525534 systemd-networkd[709]: lo: Gained carrier
Jan 14 06:03:11.526744 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 06:03:11.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.529856 systemd[1]: Reached target network.target - Network.
Jan 14 06:03:11.651899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 06:03:11.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.664753 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 06:03:11.734912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 06:03:11.751055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 06:03:11.772222 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 06:03:11.793431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 06:03:11.801661 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 06:03:11.827330 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 06:03:11.840498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 06:03:11.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:11.851054 disk-uuid[768]: Primary Header is updated.
Jan 14 06:03:11.851054 disk-uuid[768]: Secondary Entries is updated.
Jan 14 06:03:11.851054 disk-uuid[768]: Secondary Header is updated.
Jan 14 06:03:11.840816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 06:03:11.850957 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 06:03:11.866116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 06:03:11.906736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 14 06:03:11.894515 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 06:03:11.894523 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 06:03:11.898144 systemd-networkd[709]: eth0: Link UP
Jan 14 06:03:11.899944 systemd-networkd[709]: eth0: Gained carrier
Jan 14 06:03:11.899966 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 06:03:11.944679 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 06:03:12.109346 kernel: AES CTR mode by8 optimization enabled
Jan 14 06:03:12.037664 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 06:03:12.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:12.117183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 06:03:12.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:12.122029 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 06:03:12.127453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 06:03:12.136441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 06:03:12.148023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 06:03:12.191226 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 06:03:12.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:12.923980 disk-uuid[769]: Warning: The kernel is still using the old partition table.
Jan 14 06:03:12.923980 disk-uuid[769]: The new table will be used at the next reboot or after you
Jan 14 06:03:12.923980 disk-uuid[769]: run partprobe(8) or kpartx(8)
Jan 14 06:03:12.923980 disk-uuid[769]: The operation has completed successfully.
Jan 14 06:03:12.947097 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 06:03:12.947299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 06:03:12.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:12.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:12.951340 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 06:03:13.008655 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Jan 14 06:03:13.016751 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 06:03:13.016828 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 06:03:13.027084 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 06:03:13.027113 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 06:03:13.041653 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 06:03:13.043206 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 06:03:13.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:13.047868 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 06:03:13.195023 ignition[880]: Ignition 2.24.0
Jan 14 06:03:13.195067 ignition[880]: Stage: fetch-offline
Jan 14 06:03:13.195135 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:13.195157 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:13.195273 ignition[880]: parsed url from cmdline: ""
Jan 14 06:03:13.195279 ignition[880]: no config URL provided
Jan 14 06:03:13.195286 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 06:03:13.195298 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jan 14 06:03:13.195346 ignition[880]: op(1): [started] loading QEMU firmware config module
Jan 14 06:03:13.195351 ignition[880]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 06:03:13.214010 ignition[880]: op(1): [finished] loading QEMU firmware config module
Jan 14 06:03:13.214037 ignition[880]: QEMU firmware config was not found. Ignoring...
Jan 14 06:03:13.415513 ignition[880]: parsing config with SHA512: 5e4ad280d9f9517d357c32fb36b7ac010d8e53dff44fbc9b488e1dbd671ebab62e704dc53db4a8c6a7d3054cad30cd72e72e5de6e0e2265ba47589c40cca422d
Jan 14 06:03:13.422364 unknown[880]: fetched base config from "system"
Jan 14 06:03:13.422394 unknown[880]: fetched user config from "qemu"
Jan 14 06:03:13.422862 ignition[880]: fetch-offline: fetch-offline passed
Jan 14 06:03:13.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:13.426688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 06:03:13.422923 ignition[880]: Ignition finished successfully
Jan 14 06:03:13.432107 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 06:03:13.433665 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 06:03:13.490251 ignition[890]: Ignition 2.24.0
Jan 14 06:03:13.490297 ignition[890]: Stage: kargs
Jan 14 06:03:13.490541 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:13.495179 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 06:03:13.490640 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:13.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:13.499766 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 06:03:13.492055 ignition[890]: kargs: kargs passed
Jan 14 06:03:13.492122 ignition[890]: Ignition finished successfully
Jan 14 06:03:13.542176 ignition[897]: Ignition 2.24.0
Jan 14 06:03:13.542208 ignition[897]: Stage: disks
Jan 14 06:03:13.542406 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:13.542426 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:13.543926 ignition[897]: disks: disks passed
Jan 14 06:03:13.543994 ignition[897]: Ignition finished successfully
Jan 14 06:03:13.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:13.553871 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 06:03:13.556435 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 06:03:13.561175 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 06:03:13.566479 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 06:03:13.572114 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 06:03:13.578005 systemd[1]: Reached target basic.target - Basic System.
Jan 14 06:03:13.580859 systemd-networkd[709]: eth0: Gained IPv6LL
Jan 14 06:03:13.586653 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 06:03:13.643650 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 06:03:13.650880 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 06:03:13.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:13.653899 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 06:03:13.808690 kernel: EXT4-fs (vda9): mounted filesystem 06cc0495-6f26-4e6e-84ba-33c1e3a1737c r/w with ordered data mode. Quota mode: none.
Jan 14 06:03:13.810171 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 06:03:13.813519 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 06:03:13.820490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 06:03:13.825345 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 06:03:13.828900 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 06:03:13.828959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 06:03:13.862356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914)
Jan 14 06:03:13.862392 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 06:03:13.862418 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 06:03:13.829001 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 06:03:13.873492 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 06:03:13.873515 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 06:03:13.841026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 06:03:13.864113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 06:03:13.875379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 06:03:14.106302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 06:03:14.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:14.112068 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 06:03:14.119311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 06:03:14.141981 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 06:03:14.148244 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 06:03:14.171839 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 06:03:14.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:14.196200 ignition[1012]: INFO : Ignition 2.24.0
Jan 14 06:03:14.196200 ignition[1012]: INFO : Stage: mount
Jan 14 06:03:14.201009 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:14.201009 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:14.201009 ignition[1012]: INFO : mount: mount passed
Jan 14 06:03:14.201009 ignition[1012]: INFO : Ignition finished successfully
Jan 14 06:03:14.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:14.209507 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 06:03:14.213469 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 06:03:14.249120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 06:03:14.289933 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Jan 14 06:03:14.289986 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 06:03:14.290008 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 06:03:14.301084 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 06:03:14.301116 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 06:03:14.303418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 06:03:14.347174 ignition[1041]: INFO : Ignition 2.24.0
Jan 14 06:03:14.347174 ignition[1041]: INFO : Stage: files
Jan 14 06:03:14.352620 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:14.352620 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:14.352620 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 06:03:14.352620 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 06:03:14.352620 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 06:03:14.371357 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 06:03:14.371357 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 06:03:14.371357 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 06:03:14.371357 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 06:03:14.371357 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 14 06:03:14.362013 unknown[1041]: wrote ssh authorized keys file for user: core
Jan 14 06:03:14.424460 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 06:03:14.510014 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 06:03:14.510014 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 06:03:14.524223 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 14 06:03:14.833443 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 06:03:15.299894 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 06:03:15.299894 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 06:03:15.310786 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 06:03:15.319394 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 06:03:15.319394 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 06:03:15.319394 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 06:03:15.319394 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 06:03:15.336935 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 06:03:15.336935 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 06:03:15.336935 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 06:03:15.368088 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 06:03:15.376161 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 06:03:15.380425 ignition[1041]: INFO : files: files passed
Jan 14 06:03:15.380425 ignition[1041]: INFO : Ignition finished successfully
Jan 14 06:03:15.422092 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jan 14 06:03:15.422138 kernel: audit: type=1130 audit(1768370595.397:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.393559 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 06:03:15.400851 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 06:03:15.429638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 06:03:15.437010 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 06:03:15.437245 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 06:03:15.459163 kernel: audit: type=1130 audit(1768370595.436:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.459198 kernel: audit: type=1131 audit(1768370595.436:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.469312 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 06:03:15.476463 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 06:03:15.476463 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 06:03:15.485653 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 06:03:15.492936 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 06:03:15.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.500202 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 06:03:15.512191 kernel: audit: type=1130 audit(1768370595.499:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.510862 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 06:03:15.587696 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 06:03:15.587987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 06:03:15.594181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 06:03:15.616981 kernel: audit: type=1130 audit(1768370595.593:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.617027 kernel: audit: type=1131 audit(1768370595.593:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.599680 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 06:03:15.621527 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 06:03:15.623455 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 06:03:15.679894 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 06:03:15.693240 kernel: audit: type=1130 audit(1768370595.680:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.683473 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 06:03:15.719259 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 06:03:15.719411 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 06:03:15.721249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 06:03:15.737054 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 06:03:15.738667 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 06:03:15.738937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 06:03:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.751133 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 06:03:15.761849 kernel: audit: type=1131 audit(1768370595.747:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.758155 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 06:03:15.763256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 06:03:15.770670 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 06:03:15.772082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 06:03:15.777177 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 06:03:15.786526 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 06:03:15.787629 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 06:03:15.792353 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 06:03:15.802516 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 06:03:15.807897 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 06:03:15.809294 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 06:03:15.825408 kernel: audit: type=1131 audit(1768370595.813:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.809535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 06:03:15.825757 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 06:03:15.831388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 06:03:15.837164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 06:03:15.837343 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 06:03:15.859437 kernel: audit: type=1131 audit(1768370595.847:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.839220 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 06:03:15.839379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 06:03:15.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.859638 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 06:03:15.859826 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 06:03:15.865099 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 06:03:15.870377 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 06:03:15.876675 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 06:03:15.882732 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 06:03:15.888776 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 06:03:15.895782 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 06:03:15.895982 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 06:03:15.901781 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 06:03:15.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.901957 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 06:03:15.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.903253 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 06:03:15.903391 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 06:03:15.910384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 06:03:15.910623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 06:03:15.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.917641 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 06:03:15.917856 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 06:03:15.931095 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 06:03:15.935683 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 06:03:15.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.935900 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 06:03:15.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.938869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 06:03:15.946378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 06:03:15.946554 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 06:03:15.955722 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 06:03:15.981982 ignition[1097]: INFO : Ignition 2.24.0
Jan 14 06:03:15.981982 ignition[1097]: INFO : Stage: umount
Jan 14 06:03:15.981982 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 06:03:15.981982 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 06:03:15.981982 ignition[1097]: INFO : umount: umount passed
Jan 14 06:03:15.981982 ignition[1097]: INFO : Ignition finished successfully
Jan 14 06:03:15.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.955952 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 06:03:15.961276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 06:03:15.961448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 06:03:16.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.985203 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 06:03:16.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.985353 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 06:03:16.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:15.987769 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 06:03:15.987927 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 06:03:16.000180 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 06:03:16.003037 systemd[1]: Stopped target network.target - Network.
Jan 14 06:03:16.004169 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 06:03:16.004230 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 06:03:16.010479 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 06:03:16.010540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 06:03:16.012288 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 06:03:16.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.012343 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 06:03:16.019486 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 06:03:16.019554 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 06:03:16.024431 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 06:03:16.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.033665 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 06:03:16.051978 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 06:03:16.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.079000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 06:03:16.052149 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 06:03:16.066350 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 06:03:16.085000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 06:03:16.066505 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 06:03:16.077053 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 06:03:16.077233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 06:03:16.080519 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 06:03:16.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.083293 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 06:03:16.083373 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 06:03:16.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.088560 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 06:03:16.088686 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 06:03:16.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.102416 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 06:03:16.105286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 06:03:16.105359 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 06:03:16.112546 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 06:03:16.112663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 06:03:16.117496 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 06:03:16.117553 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 06:03:16.123399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 06:03:16.155046 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 06:03:16.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.155257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 06:03:16.162150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 06:03:16.162210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 06:03:16.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.166635 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 06:03:16.166686 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 06:03:16.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.172138 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 06:03:16.172198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 06:03:16.178902 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 06:03:16.178967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 06:03:16.183773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 06:03:16.183854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 06:03:16.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.196429 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 06:03:16.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.198442 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 06:03:16.198509 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 06:03:16.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.204619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 06:03:16.204679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 06:03:16.210364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 06:03:16.210455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 06:03:16.246856 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 06:03:16.259855 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 06:03:16.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.268795 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 06:03:16.268989 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 06:03:16.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:16.275924 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 06:03:16.282455 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 06:03:16.314680 systemd[1]: Switching root.
Jan 14 06:03:16.365008 systemd-journald[316]: Journal stopped
Jan 14 06:03:18.040254 systemd-journald[316]: Received SIGTERM from PID 1 (systemd).
Jan 14 06:03:18.040355 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 06:03:18.040375 kernel: SELinux: policy capability open_perms=1
Jan 14 06:03:18.040396 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 06:03:18.040417 kernel: SELinux: policy capability always_check_network=0
Jan 14 06:03:18.040434 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 06:03:18.040451 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 06:03:18.040469 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 06:03:18.040485 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 06:03:18.040497 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 06:03:18.040514 systemd[1]: Successfully loaded SELinux policy in 78.490ms.
Jan 14 06:03:18.040554 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.374ms.
Jan 14 06:03:18.040637 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 06:03:18.040662 systemd[1]: Detected virtualization kvm.
Jan 14 06:03:18.040683 systemd[1]: Detected architecture x86-64.
Jan 14 06:03:18.040696 systemd[1]: Detected first boot.
Jan 14 06:03:18.040715 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 06:03:18.040728 zram_generator::config[1143]: No configuration found.
Jan 14 06:03:18.040742 kernel: Guest personality initialized and is inactive
Jan 14 06:03:18.040757 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 06:03:18.040775 kernel: Initialized host personality
Jan 14 06:03:18.040795 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 06:03:18.040812 systemd[1]: Populated /etc with preset unit settings.
Jan 14 06:03:18.040869 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 06:03:18.040893 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 06:03:18.040915 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 06:03:18.040944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 06:03:18.040963 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 06:03:18.040976 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 06:03:18.040988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 06:03:18.041009 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 06:03:18.041031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 06:03:18.041045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 06:03:18.041057 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 06:03:18.041073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 06:03:18.041088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 06:03:18.041101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 06:03:18.041113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 06:03:18.041126 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 06:03:18.041138 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 06:03:18.041158 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 06:03:18.041174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 06:03:18.041187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 06:03:18.041199 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 06:03:18.041216 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 06:03:18.041236 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 06:03:18.041255 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 06:03:18.041282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 06:03:18.041297 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 06:03:18.041310 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 06:03:18.041322 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 06:03:18.041335 systemd[1]: Reached target swap.target - Swaps.
Jan 14 06:03:18.041348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 06:03:18.041360 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 06:03:18.041375 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 06:03:18.041387 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 06:03:18.041402 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 06:03:18.041414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 06:03:18.041426 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 06:03:18.041438 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 06:03:18.041450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 06:03:18.041466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 06:03:18.041479 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 06:03:18.041491 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 06:03:18.041505 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 06:03:18.041517 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 06:03:18.041529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 06:03:18.041546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 06:03:18.041622 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 06:03:18.041650 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 06:03:18.041669 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 06:03:18.041683 systemd[1]: Reached target machines.target - Containers.
Jan 14 06:03:18.041695 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 06:03:18.041708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 06:03:18.041720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 06:03:18.041738 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 06:03:18.041756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 06:03:18.041780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 06:03:18.041796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 06:03:18.041809 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 06:03:18.041860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 06:03:18.041880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 06:03:18.041897 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 06:03:18.041910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 06:03:18.041922 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 06:03:18.041934 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 06:03:18.041947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 06:03:18.041964 kernel: ACPI: bus type drm_connector registered
Jan 14 06:03:18.041984 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 06:03:18.042006 kernel: fuse: init (API version 7.41)
Jan 14 06:03:18.042020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 06:03:18.042032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 06:03:18.042045 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 06:03:18.042062 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 06:03:18.042074 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 06:03:18.042087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 06:03:18.042124 systemd-journald[1229]: Collecting audit messages is enabled.
Jan 14 06:03:18.042149 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 06:03:18.042167 systemd-journald[1229]: Journal started
Jan 14 06:03:18.042188 systemd-journald[1229]: Runtime Journal (/run/log/journal/b105e1993e5641dbb98b572e6d7d1ec6) is 6M, max 48.2M, 42.1M free.
Jan 14 06:03:17.690000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 06:03:17.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:17.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:17.962000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 06:03:17.962000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 06:03:17.964000 audit: BPF prog-id=15 op=LOAD
Jan 14 06:03:17.965000 audit: BPF prog-id=16 op=LOAD
Jan 14 06:03:17.965000 audit: BPF prog-id=17 op=LOAD
Jan 14 06:03:18.037000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 06:03:18.037000 audit[1229]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffd85cec40 a2=4000 a3=0 items=0 ppid=1 pid=1229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:18.037000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 06:03:17.456779 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 06:03:17.485323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 06:03:17.486418 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 06:03:17.487332 systemd[1]: systemd-journald.service: Consumed 1.051s CPU time.
Jan 14 06:03:18.046874 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 06:03:18.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.051416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 06:03:18.054701 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 06:03:18.057499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 06:03:18.060498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 06:03:18.063870 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 06:03:18.066925 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 06:03:18.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.070640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 06:03:18.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.075502 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 06:03:18.075861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 06:03:18.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.080240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 06:03:18.080516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 06:03:18.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.084958 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 06:03:18.085231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 06:03:18.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.089320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 06:03:18.089617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 06:03:18.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.094129 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 06:03:18.094398 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 06:03:18.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.098427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 06:03:18.098806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 06:03:18.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.102430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 06:03:18.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.107080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 06:03:18.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.113044 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 06:03:18.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.117244 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 06:03:18.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.136129 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 06:03:18.140521 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 06:03:18.146110 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 06:03:18.150898 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 06:03:18.154339 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 06:03:18.154682 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 06:03:18.159240 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 06:03:18.163780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 06:03:18.163998 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 06:03:18.166328 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 06:03:18.171475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 06:03:18.174871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 06:03:18.176481 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 06:03:18.179881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 06:03:18.183772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 06:03:18.189387 systemd-journald[1229]: Time spent on flushing to /var/log/journal/b105e1993e5641dbb98b572e6d7d1ec6 is 29.747ms for 1090 entries.
Jan 14 06:03:18.189387 systemd-journald[1229]: System Journal (/var/log/journal/b105e1993e5641dbb98b572e6d7d1ec6) is 8M, max 163.5M, 155.5M free.
Jan 14 06:03:18.229635 systemd-journald[1229]: Received client request to flush runtime journal.
Jan 14 06:03:18.229678 kernel: loop1: detected capacity change from 0 to 224512
Jan 14 06:03:18.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.190891 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 06:03:18.202258 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 06:03:18.209428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 06:03:18.216979 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 06:03:18.225090 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 06:03:18.237865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 06:03:18.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.243085 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 06:03:18.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.248447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 06:03:18.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.258109 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 06:03:18.266902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 06:03:18.282274 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 06:03:18.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.288000 audit: BPF prog-id=18 op=LOAD
Jan 14 06:03:18.295058 kernel: loop2: detected capacity change from 0 to 50784
Jan 14 06:03:18.288000 audit: BPF prog-id=19 op=LOAD
Jan 14 06:03:18.288000 audit: BPF prog-id=20 op=LOAD
Jan 14 06:03:18.290162 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 06:03:18.295000 audit: BPF prog-id=21 op=LOAD
Jan 14 06:03:18.297013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 06:03:18.303777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 06:03:18.310000 audit: BPF prog-id=22 op=LOAD
Jan 14 06:03:18.310000 audit: BPF prog-id=23 op=LOAD
Jan 14 06:03:18.311000 audit: BPF prog-id=24 op=LOAD
Jan 14 06:03:18.312958 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 06:03:18.321000 audit: BPF prog-id=25 op=LOAD
Jan 14 06:03:18.321000 audit: BPF prog-id=26 op=LOAD
Jan 14 06:03:18.321000 audit: BPF prog-id=27 op=LOAD
Jan 14 06:03:18.322779 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 06:03:18.332455 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 06:03:18.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.355806 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jan 14 06:03:18.359663 kernel: loop3: detected capacity change from 0 to 111560
Jan 14 06:03:18.359055 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jan 14 06:03:18.370933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 06:03:18.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.394516 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 06:03:18.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.405241 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 06:03:18.408613 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 06:03:18.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.425672 kernel: loop4: detected capacity change from 0 to 224512
Jan 14 06:03:18.445044 kernel: loop5: detected capacity change from 0 to 50784
Jan 14 06:03:18.464650 kernel: loop6: detected capacity change from 0 to 111560
Jan 14 06:03:18.482061 (sd-merge)[1301]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 14 06:03:18.488634 (sd-merge)[1301]: Merged extensions into '/usr'.
Jan 14 06:03:18.489534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 06:03:18.493978 systemd-oomd[1279]: No swap; memory pressure usage will be degraded
Jan 14 06:03:18.496309 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 06:03:18.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.501218 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 06:03:18.501274 systemd[1]: Reloading...
Jan 14 06:03:18.520009 systemd-resolved[1281]: Positive Trust Anchors:
Jan 14 06:03:18.520685 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 06:03:18.520782 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 06:03:18.520948 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 06:03:18.528960 systemd-resolved[1281]: Defaulting to hostname 'linux'.
Jan 14 06:03:18.592634 zram_generator::config[1334]: No configuration found.
Jan 14 06:03:18.845899 systemd[1]: Reloading finished in 343 ms.
Jan 14 06:03:18.878319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 06:03:18.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.882625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 06:03:18.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.886446 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 06:03:18.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:18.896934 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 06:03:18.927050 systemd[1]: Starting ensure-sysext.service...
Jan 14 06:03:18.931168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 06:03:18.935000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 06:03:18.935000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 06:03:18.935000 audit: BPF prog-id=28 op=LOAD
Jan 14 06:03:18.935000 audit: BPF prog-id=29 op=LOAD
Jan 14 06:03:18.939791 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 06:03:18.951000 audit: BPF prog-id=30 op=LOAD
Jan 14 06:03:18.951000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 06:03:18.951000 audit: BPF prog-id=31 op=LOAD
Jan 14 06:03:18.951000 audit: BPF prog-id=32 op=LOAD
Jan 14 06:03:18.951000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 06:03:18.951000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=33 op=LOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=34 op=LOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=35 op=LOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 06:03:18.953000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=36 op=LOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=37 op=LOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=38 op=LOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 06:03:18.955000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=39 op=LOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=40 op=LOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=41 op=LOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 06:03:18.957000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 06:03:18.958000 audit: BPF prog-id=42 op=LOAD
Jan 14 06:03:18.958000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 06:03:18.969285 systemd[1]: Reload requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)...
Jan 14 06:03:18.969323 systemd[1]: Reloading...
Jan 14 06:03:18.971560 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 06:03:18.971674 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 06:03:18.972079 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 06:03:18.974048 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Jan 14 06:03:18.974283 systemd-tmpfiles[1372]: ACLs are not supported, ignoring.
Jan 14 06:03:18.984464 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 06:03:18.984505 systemd-tmpfiles[1372]: Skipping /boot
Jan 14 06:03:18.992085 systemd-udevd[1373]: Using default interface naming scheme 'v257'.
Jan 14 06:03:19.008062 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 06:03:19.008205 systemd-tmpfiles[1372]: Skipping /boot
Jan 14 06:03:19.054636 zram_generator::config[1408]: No configuration found.
Jan 14 06:03:19.197653 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 06:03:19.197729 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 06:03:19.206640 kernel: ACPI: button: Power Button [PWRF] Jan 14 06:03:19.235372 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 06:03:19.235952 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 06:03:19.404218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 06:03:19.409539 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 06:03:19.412026 systemd[1]: Reloading finished in 442 ms. Jan 14 06:03:19.433177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:03:19.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.443087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 14 06:03:19.456000 audit: BPF prog-id=43 op=LOAD Jan 14 06:03:19.456000 audit: BPF prog-id=33 op=UNLOAD Jan 14 06:03:19.456000 audit: BPF prog-id=44 op=LOAD Jan 14 06:03:19.456000 audit: BPF prog-id=45 op=LOAD Jan 14 06:03:19.457000 audit: BPF prog-id=34 op=UNLOAD Jan 14 06:03:19.457000 audit: BPF prog-id=35 op=UNLOAD Jan 14 06:03:19.458000 audit: BPF prog-id=46 op=LOAD Jan 14 06:03:19.458000 audit: BPF prog-id=47 op=LOAD Jan 14 06:03:19.458000 audit: BPF prog-id=28 op=UNLOAD Jan 14 06:03:19.458000 audit: BPF prog-id=29 op=UNLOAD Jan 14 06:03:19.460000 audit: BPF prog-id=48 op=LOAD Jan 14 06:03:19.460000 audit: BPF prog-id=42 op=UNLOAD Jan 14 06:03:19.461000 audit: BPF prog-id=49 op=LOAD Jan 14 06:03:19.461000 audit: BPF prog-id=36 op=UNLOAD Jan 14 06:03:19.461000 audit: BPF prog-id=50 op=LOAD Jan 14 06:03:19.461000 audit: BPF prog-id=51 op=LOAD Jan 14 06:03:19.461000 audit: BPF prog-id=37 op=UNLOAD Jan 14 06:03:19.461000 audit: BPF prog-id=38 op=UNLOAD Jan 14 06:03:19.462000 audit: BPF prog-id=52 op=LOAD Jan 14 06:03:19.462000 audit: BPF prog-id=30 op=UNLOAD Jan 14 06:03:19.462000 audit: BPF prog-id=53 op=LOAD Jan 14 06:03:19.462000 audit: BPF prog-id=54 op=LOAD Jan 14 06:03:19.462000 audit: BPF prog-id=31 op=UNLOAD Jan 14 06:03:19.462000 audit: BPF prog-id=32 op=UNLOAD Jan 14 06:03:19.464000 audit: BPF prog-id=55 op=LOAD Jan 14 06:03:19.464000 audit: BPF prog-id=39 op=UNLOAD Jan 14 06:03:19.464000 audit: BPF prog-id=56 op=LOAD Jan 14 06:03:19.464000 audit: BPF prog-id=57 op=LOAD Jan 14 06:03:19.464000 audit: BPF prog-id=40 op=UNLOAD Jan 14 06:03:19.464000 audit: BPF prog-id=41 op=UNLOAD Jan 14 06:03:19.496977 kernel: kvm_amd: TSC scaling supported Jan 14 06:03:19.497034 kernel: kvm_amd: Nested Virtualization enabled Jan 14 06:03:19.497050 kernel: kvm_amd: Nested Paging enabled Jan 14 06:03:19.499855 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 06:03:19.499911 kernel: kvm_amd: PMU virtualization is disabled Jan 14 06:03:19.542256 systemd[1]: 
Finished ensure-sysext.service. Jan 14 06:03:19.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.555618 kernel: EDAC MC: Ver: 3.0.0 Jan 14 06:03:19.571110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:03:19.572732 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 06:03:19.576696 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 06:03:19.580247 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:03:19.598054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 06:03:19.603819 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 06:03:19.608133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 06:03:19.613861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 06:03:19.617275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:03:19.617443 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:03:19.624128 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 06:03:19.628958 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 14 06:03:19.632234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:03:19.636133 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 06:03:19.639000 audit: BPF prog-id=58 op=LOAD Jan 14 06:03:19.641829 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 06:03:19.649000 audit: BPF prog-id=59 op=LOAD Jan 14 06:03:19.651800 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 06:03:19.657756 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 06:03:19.670149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:03:19.674546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:03:19.678365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 06:03:19.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.683256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 14 06:03:19.684786 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 06:03:19.685087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 06:03:19.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.686679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 06:03:19.686965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 06:03:19.687517 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 06:03:19.687817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 06:03:19.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.692822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 14 06:03:19.693693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 06:03:19.699000 audit[1511]: SYSTEM_BOOT pid=1511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.702428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 06:03:19.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.718381 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 06:03:19.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:19.733000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 06:03:19.733000 audit[1532]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcea882c00 a2=420 a3=0 items=0 ppid=1487 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:19.733000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:03:19.733953 augenrules[1532]: No rules Jan 14 06:03:19.735965 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 06:03:19.736397 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 06:03:19.738699 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 06:03:19.750705 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 06:03:19.755080 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 06:03:19.801344 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 06:03:19.804281 systemd-networkd[1505]: lo: Link UP Jan 14 06:03:19.804547 systemd-networkd[1505]: lo: Gained carrier Jan 14 06:03:19.807175 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:03:19.807256 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 06:03:19.808614 systemd-networkd[1505]: eth0: Link UP Jan 14 06:03:19.809287 systemd-networkd[1505]: eth0: Gained carrier Jan 14 06:03:19.809367 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:03:19.829672 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 06:03:19.830758 systemd-timesyncd[1507]: Network configuration changed, trying to establish connection. Jan 14 06:03:19.831869 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 06:03:19.831937 systemd-timesyncd[1507]: Initial clock synchronization to Wed 2026-01-14 06:03:20.226495 UTC. Jan 14 06:03:19.921680 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 06:03:19.926413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 06:03:19.932707 systemd[1]: Reached target network.target - Network. Jan 14 06:03:19.935294 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 06:03:19.940206 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 06:03:19.945304 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 06:03:19.983311 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 06:03:20.173192 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 06:03:20.182952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 06:03:20.188495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 06:03:20.233871 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 06:03:20.237543 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 06:03:20.240976 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 06:03:20.244532 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 06:03:20.248408 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 06:03:20.252816 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 06:03:20.256304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 06:03:20.260322 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 06:03:20.264934 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 06:03:20.268415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 14 06:03:20.272537 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 06:03:20.272699 systemd[1]: Reached target paths.target - Path Units. Jan 14 06:03:20.275595 systemd[1]: Reached target timers.target - Timer Units. Jan 14 06:03:20.280330 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 06:03:20.285866 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 06:03:20.291447 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 06:03:20.295975 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 06:03:20.300068 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 06:03:20.307402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 06:03:20.311789 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 06:03:20.316952 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 06:03:20.321713 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 06:03:20.324527 systemd[1]: Reached target basic.target - Basic System. Jan 14 06:03:20.327266 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:03:20.327332 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:03:20.329381 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 06:03:20.334813 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 06:03:20.345264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 06:03:20.350138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 14 06:03:20.355042 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 06:03:20.356442 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 06:03:20.366185 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 06:03:20.375181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 06:03:20.377760 jq[1555]: false Jan 14 06:03:20.382406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 06:03:20.388869 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 06:03:20.392589 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 14 06:03:20.392504 oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 14 06:03:20.394157 extend-filesystems[1556]: Found /dev/vda6 Jan 14 06:03:20.400769 extend-filesystems[1556]: Found /dev/vda9 Jan 14 06:03:20.399278 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 06:03:20.420046 extend-filesystems[1556]: Checking size of /dev/vda9 Jan 14 06:03:20.419889 oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 14 06:03:20.430256 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 14 06:03:20.430256 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 06:03:20.430256 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 14 06:03:20.408082 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 06:03:20.419917 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 14 06:03:20.411909 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 06:03:20.420002 oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 14 06:03:20.412727 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 06:03:20.415003 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 06:03:20.425161 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 06:03:20.439732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 06:03:20.449905 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 14 06:03:20.449905 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 06:03:20.451475 extend-filesystems[1556]: Resized partition /dev/vda9 Jan 14 06:03:20.444855 oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 14 06:03:20.445133 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 06:03:20.474829 update_engine[1572]: I20260114 06:03:20.472497 1572 main.cc:92] Flatcar Update Engine starting Jan 14 06:03:20.475116 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 06:03:20.484244 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 06:03:20.484312 jq[1574]: true Jan 14 06:03:20.444877 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 06:03:20.445922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 06:03:20.446510 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 14 06:03:20.448085 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 06:03:20.452949 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 06:03:20.454047 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 06:03:20.459666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 06:03:20.460128 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 06:03:20.494191 jq[1594]: true Jan 14 06:03:20.521958 tar[1587]: linux-amd64/LICENSE Jan 14 06:03:20.521958 tar[1587]: linux-amd64/helm Jan 14 06:03:20.543740 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 06:03:20.568678 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 06:03:20.568678 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 06:03:20.568678 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 06:03:20.588822 extend-filesystems[1556]: Resized filesystem in /dev/vda9 Jan 14 06:03:20.584073 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 06:03:20.570473 dbus-daemon[1553]: [system] SELinux support is enabled Jan 14 06:03:20.595308 update_engine[1572]: I20260114 06:03:20.578104 1572 update_check_scheduler.cc:74] Next update check in 2m22s Jan 14 06:03:20.595969 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 06:03:20.596169 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 06:03:20.596197 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 06:03:20.596439 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 06:03:20.602869 systemd-logind[1568]: New seat seat0. Jan 14 06:03:20.608376 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 14 06:03:20.614852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 06:03:20.618379 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 06:03:20.614900 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 06:03:20.621206 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 06:03:20.621443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 06:03:20.636048 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Jan 14 06:03:20.634286 systemd[1]: Started update-engine.service - Update Engine. Jan 14 06:03:20.637930 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 06:03:20.660591 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 06:03:20.665019 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 06:03:20.670498 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 06:03:20.717068 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 06:03:20.725541 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 06:03:20.760874 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 06:03:20.762328 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 14 06:03:20.762579 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 06:03:20.763626 containerd[1601]: time="2026-01-14T06:03:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 06:03:20.765254 containerd[1601]: time="2026-01-14T06:03:20.765091776Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 06:03:20.770248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 06:03:20.783711 containerd[1601]: time="2026-01-14T06:03:20.783498975Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.942µs" Jan 14 06:03:20.783711 containerd[1601]: time="2026-01-14T06:03:20.783555768Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 06:03:20.783886 containerd[1601]: time="2026-01-14T06:03:20.783859574Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 06:03:20.783969 containerd[1601]: time="2026-01-14T06:03:20.783946118Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 06:03:20.784297 containerd[1601]: time="2026-01-14T06:03:20.784273222Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 06:03:20.784402 containerd[1601]: time="2026-01-14T06:03:20.784380255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 06:03:20.784575 containerd[1601]: time="2026-01-14T06:03:20.784547718Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jan 14 06:03:20.784757 containerd[1601]: time="2026-01-14T06:03:20.784696394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785189 containerd[1601]: time="2026-01-14T06:03:20.785159957Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785289 containerd[1601]: time="2026-01-14T06:03:20.785268115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785367 containerd[1601]: time="2026-01-14T06:03:20.785347468Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785446 containerd[1601]: time="2026-01-14T06:03:20.785427988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785879 containerd[1601]: time="2026-01-14T06:03:20.785850098Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.785961 containerd[1601]: time="2026-01-14T06:03:20.785942066Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 06:03:20.786186 containerd[1601]: time="2026-01-14T06:03:20.786159876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.786981 containerd[1601]: time="2026-01-14T06:03:20.786953350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 06:03:20.787096 
containerd[1601]: time="2026-01-14T06:03:20.787071045Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 06:03:20.787225 containerd[1601]: time="2026-01-14T06:03:20.787173105Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 06:03:20.787425 containerd[1601]: time="2026-01-14T06:03:20.787325092Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 06:03:20.788059 containerd[1601]: time="2026-01-14T06:03:20.788012364Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 06:03:20.788188 containerd[1601]: time="2026-01-14T06:03:20.788143577Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 06:03:20.795736 containerd[1601]: time="2026-01-14T06:03:20.795687000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 14 06:03:20.795846 containerd[1601]: time="2026-01-14T06:03:20.795777802Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 06:03:20.795938 containerd[1601]: time="2026-01-14T06:03:20.795875952Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 06:03:20.795938 containerd[1601]: time="2026-01-14T06:03:20.795898797Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 14 06:03:20.795938 containerd[1601]: time="2026-01-14T06:03:20.795916953Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 14 06:03:20.795938 containerd[1601]: time="2026-01-14T06:03:20.795932995Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.795947861Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.795961875Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.795977539Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.796003959Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.796028822Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.796046589Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.796062548Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 14 06:03:20.796090 containerd[1601]: time="2026-01-14T06:03:20.796079948Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 14 06:03:20.796370 containerd[1601]: time="2026-01-14T06:03:20.796241061Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 14 06:03:20.796370 containerd[1601]: time="2026-01-14T06:03:20.796313306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 14 06:03:20.796370 containerd[1601]: time="2026-01-14T06:03:20.796334364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 14 06:03:20.796370 containerd[1601]: time="2026-01-14T06:03:20.796347905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 14 06:03:20.796370 containerd[1601]: time="2026-01-14T06:03:20.796371622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796384343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796398777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796411867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796425366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796467490Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796481894Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796509826Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 14 06:03:20.796572 containerd[1601]: time="2026-01-14T06:03:20.796563601Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 14 06:03:20.797712 containerd[1601]: time="2026-01-14T06:03:20.796580170Z" level=info msg="Start snapshots syncer"
Jan 14 06:03:20.797712 containerd[1601]: time="2026-01-14T06:03:20.796686131Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 14 06:03:20.797475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 06:03:20.803504 containerd[1601]: time="2026-01-14T06:03:20.803396741Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 14 06:03:20.804423 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 06:03:20.804891 containerd[1601]: time="2026-01-14T06:03:20.804788745Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 14 06:03:20.805263 containerd[1601]: time="2026-01-14T06:03:20.805239388Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 14 06:03:20.805800 containerd[1601]: time="2026-01-14T06:03:20.805773514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 14 06:03:20.805997 containerd[1601]: time="2026-01-14T06:03:20.805912067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 14 06:03:20.806127 containerd[1601]: time="2026-01-14T06:03:20.806103783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 14 06:03:20.806429 containerd[1601]: time="2026-01-14T06:03:20.806328258Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 14 06:03:20.806429 containerd[1601]: time="2026-01-14T06:03:20.806358431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 14 06:03:20.806541 containerd[1601]: time="2026-01-14T06:03:20.806518229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 14 06:03:20.806786 containerd[1601]: time="2026-01-14T06:03:20.806654394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 14 06:03:20.806786 containerd[1601]: time="2026-01-14T06:03:20.806675589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 14 06:03:20.806936 containerd[1601]: time="2026-01-14T06:03:20.806691043Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 14 06:03:20.807236 containerd[1601]: time="2026-01-14T06:03:20.807212155Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 06:03:20.807373 containerd[1601]: time="2026-01-14T06:03:20.807349214Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 06:03:20.807499 containerd[1601]: time="2026-01-14T06:03:20.807476959Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807729535Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807747743Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807770294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807843066Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807867424Z" level=info msg="runtime interface created"
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807877759Z" level=info msg="created NRI interface"
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807892972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807910097Z" level=info msg="Connect containerd service"
Jan 14 06:03:20.808729 containerd[1601]: time="2026-01-14T06:03:20.807935202Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 06:03:20.809567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 06:03:20.810128 containerd[1601]: time="2026-01-14T06:03:20.810032076Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 06:03:20.813226 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 06:03:20.937447 containerd[1601]: time="2026-01-14T06:03:20.937383278Z" level=info msg="Start subscribing containerd event"
Jan 14 06:03:20.937965 containerd[1601]: time="2026-01-14T06:03:20.937854023Z" level=info msg="Start recovering state"
Jan 14 06:03:20.938270 containerd[1601]: time="2026-01-14T06:03:20.938073051Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 06:03:20.938420 containerd[1601]: time="2026-01-14T06:03:20.938222074Z" level=info msg="Start event monitor"
Jan 14 06:03:20.939677 containerd[1601]: time="2026-01-14T06:03:20.938495184Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939743630Z" level=info msg="Start cni network conf syncer for default"
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939796175Z" level=info msg="Start streaming server"
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939814437Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939826201Z" level=info msg="runtime interface starting up..."
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939834895Z" level=info msg="starting plugins..."
Jan 14 06:03:20.939799 containerd[1601]: time="2026-01-14T06:03:20.939859107Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 06:03:20.940582 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 06:03:20.944011 containerd[1601]: time="2026-01-14T06:03:20.943943940Z" level=info msg="containerd successfully booted in 0.181976s"
Jan 14 06:03:21.018713 tar[1587]: linux-amd64/README.md
Jan 14 06:03:21.047482 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 06:03:21.709279 systemd-networkd[1505]: eth0: Gained IPv6LL
Jan 14 06:03:21.713174 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 06:03:21.717676 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 06:03:21.723104 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 14 06:03:21.728473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 06:03:21.746586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 06:03:21.790041 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 14 06:03:21.790480 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 14 06:03:21.796084 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 06:03:21.801228 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 06:03:22.680108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 06:03:22.683752 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 06:03:22.686281 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 06:03:22.686748 systemd[1]: Startup finished in 5.760s (kernel) + 6.864s (initrd) + 6.184s (userspace) = 18.809s.
Jan 14 06:03:22.733775 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 06:03:22.735464 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980).
Jan 14 06:03:22.867737 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:22.870744 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:22.881249 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 06:03:22.883165 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 06:03:22.890095 systemd-logind[1568]: New session 1 of user core.
Jan 14 06:03:22.919588 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 06:03:22.924392 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 06:03:22.950852 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:22.955707 systemd-logind[1568]: New session 2 of user core.
Jan 14 06:03:23.137945 systemd[1710]: Queued start job for default target default.target.
Jan 14 06:03:23.151584 systemd[1710]: Created slice app.slice - User Application Slice.
Jan 14 06:03:23.151707 systemd[1710]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 06:03:23.151728 systemd[1710]: Reached target paths.target - Paths.
Jan 14 06:03:23.151809 systemd[1710]: Reached target timers.target - Timers.
Jan 14 06:03:23.154292 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 06:03:23.158877 systemd[1710]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 06:03:23.179816 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 06:03:23.179956 systemd[1710]: Reached target sockets.target - Sockets.
Jan 14 06:03:23.183888 kubelet[1694]: E0114 06:03:23.183812 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 06:03:23.185105 systemd[1710]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 06:03:23.185253 systemd[1710]: Reached target basic.target - Basic System.
Jan 14 06:03:23.185345 systemd[1710]: Reached target default.target - Main User Target.
Jan 14 06:03:23.185391 systemd[1710]: Startup finished in 220ms.
Jan 14 06:03:23.186134 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 06:03:23.194804 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 06:03:23.195173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 06:03:23.195471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 06:03:23.196092 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 265.6M memory peak.
Jan 14 06:03:23.223887 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988).
Jan 14 06:03:23.290753 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:23.292680 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:23.299647 systemd-logind[1568]: New session 3 of user core.
Jan 14 06:03:23.313800 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 06:03:23.331863 sshd[1733]: Connection closed by 10.0.0.1 port 45988
Jan 14 06:03:23.334869 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Jan 14 06:03:23.340809 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:45988.service: Deactivated successfully.
Jan 14 06:03:23.342943 systemd[1]: session-3.scope: Deactivated successfully.
Jan 14 06:03:23.344112 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit.
Jan 14 06:03:23.347334 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:45992.service - OpenSSH per-connection server daemon (10.0.0.1:45992).
Jan 14 06:03:23.348139 systemd-logind[1568]: Removed session 3.
Jan 14 06:03:23.408566 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 45992 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:23.410513 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:23.416922 systemd-logind[1568]: New session 4 of user core.
Jan 14 06:03:23.430814 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 06:03:23.444607 sshd[1744]: Connection closed by 10.0.0.1 port 45992
Jan 14 06:03:23.445148 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 14 06:03:23.453920 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:45992.service: Deactivated successfully.
Jan 14 06:03:23.456468 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 06:03:23.457923 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit.
Jan 14 06:03:23.461470 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:45998.service - OpenSSH per-connection server daemon (10.0.0.1:45998).
Jan 14 06:03:23.462307 systemd-logind[1568]: Removed session 4.
Jan 14 06:03:23.546250 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 45998 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:23.548400 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:23.555943 systemd-logind[1568]: New session 5 of user core.
Jan 14 06:03:23.569824 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 06:03:23.588225 sshd[1754]: Connection closed by 10.0.0.1 port 45998
Jan 14 06:03:23.588499 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 14 06:03:23.603116 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:45998.service: Deactivated successfully.
Jan 14 06:03:23.605384 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 06:03:23.606782 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit.
Jan 14 06:03:23.609809 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010).
Jan 14 06:03:23.610567 systemd-logind[1568]: Removed session 5.
Jan 14 06:03:23.688303 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:23.690368 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:23.697199 systemd-logind[1568]: New session 6 of user core.
Jan 14 06:03:23.710848 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 06:03:23.739313 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 06:03:23.739833 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 06:03:23.756953 sudo[1765]: pam_unix(sudo:session): session closed for user root
Jan 14 06:03:23.758813 sshd[1764]: Connection closed by 10.0.0.1 port 46010
Jan 14 06:03:23.759232 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Jan 14 06:03:23.768994 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:46010.service: Deactivated successfully.
Jan 14 06:03:23.771284 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 06:03:23.772689 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit.
Jan 14 06:03:23.776141 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016).
Jan 14 06:03:23.777054 systemd-logind[1568]: Removed session 6.
Jan 14 06:03:23.852776 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:23.855004 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:23.861077 systemd-logind[1568]: New session 7 of user core.
Jan 14 06:03:23.880904 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 06:03:23.900261 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 06:03:23.900958 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 06:03:23.905955 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jan 14 06:03:23.915267 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 06:03:23.916041 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 06:03:23.926158 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 06:03:23.984000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 06:03:23.985333 augenrules[1803]: No rules
Jan 14 06:03:23.986861 kernel: kauditd_printk_skb: 181 callbacks suppressed
Jan 14 06:03:23.986913 kernel: audit: type=1305 audit(1768370603.984:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 06:03:23.986856 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 06:03:23.987261 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 06:03:23.988346 sudo[1778]: pam_unix(sudo:session): session closed for user root
Jan 14 06:03:23.989988 sshd[1777]: Connection closed by 10.0.0.1 port 46016
Jan 14 06:03:23.984000 audit[1803]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff99fdc860 a2=420 a3=0 items=0 ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:23.992193 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Jan 14 06:03:24.000104 kernel: audit: type=1300 audit(1768370603.984:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff99fdc860 a2=420 a3=0 items=0 ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:24.000138 kernel: audit: type=1327 audit(1768370603.984:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 06:03:23.984000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 06:03:23.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.010104 kernel: audit: type=1130 audit(1768370603.986:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.010131 kernel: audit: type=1131 audit(1768370603.986:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:23.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:23.987000 audit[1778]: USER_END pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.023780 kernel: audit: type=1106 audit(1768370603.987:227): pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.023809 kernel: audit: type=1104 audit(1768370603.987:228): pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:23.987000 audit[1778]: CRED_DISP pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.030004 kernel: audit: type=1106 audit(1768370603.993:229): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:23.993000 audit[1772]: USER_END pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.040527 kernel: audit: type=1104 audit(1768370603.993:230): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:23.993000 audit[1772]: CRED_DISP pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.059492 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:46016.service: Deactivated successfully.
Jan 14 06:03:24.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:46016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.061525 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 06:03:24.062563 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit.
Jan 14 06:03:24.065415 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:46032.service - OpenSSH per-connection server daemon (10.0.0.1:46032).
Jan 14 06:03:24.066111 systemd-logind[1568]: Removed session 7.
Jan 14 06:03:24.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:46032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.067646 kernel: audit: type=1131 audit(1768370604.059:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:46016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.140000 audit[1812]: USER_ACCT pid=1812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.142270 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 46032 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:03:24.141000 audit[1812]: CRED_ACQ pid=1812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.141000 audit[1812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd63411d40 a2=3 a3=0 items=0 ppid=1 pid=1812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:24.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:03:24.144339 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:03:24.150304 systemd-logind[1568]: New session 8 of user core.
Jan 14 06:03:24.155829 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 06:03:24.159000 audit[1812]: USER_START pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.161000 audit[1816]: CRED_ACQ pid=1816 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:03:24.172000 audit[1817]: USER_ACCT pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.174236 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 06:03:24.172000 audit[1817]: CRED_REFR pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.174742 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 06:03:24.173000 audit[1817]: USER_START pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 06:03:24.520009 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 06:03:24.542017 (dockerd)[1839]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 06:03:24.829719 dockerd[1839]: time="2026-01-14T06:03:24.829366495Z" level=info msg="Starting up"
Jan 14 06:03:24.830679 dockerd[1839]: time="2026-01-14T06:03:24.830653026Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 14 06:03:24.849675 dockerd[1839]: time="2026-01-14T06:03:24.849555820Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 14 06:03:25.167425 dockerd[1839]: time="2026-01-14T06:03:25.165740420Z" level=info msg="Loading containers: start."
Jan 14 06:03:25.316621 kernel: hrtimer: interrupt took 4621566 ns
Jan 14 06:03:26.043856 kernel: Initializing XFRM netlink socket
Jan 14 06:03:26.321000 audit[1893]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 06:03:26.321000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd8f2c7560 a2=0 a3=0 items=0 ppid=1839 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:26.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 06:03:26.332000 audit[1895]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 06:03:26.332000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc82f7f1d0 a2=0 a3=0 items=0 ppid=1839 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:26.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 06:03:26.337000 audit[1897]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 06:03:26.337000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc293fbd60 a2=0 a3=0 items=0 ppid=1839 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:26.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 06:03:26.344000 audit[1899]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 06:03:26.344000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb16bdb70 a2=0 a3=0 items=0 ppid=1839 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:03:26.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 14 06:03:26.418000 audit[1901]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 06:03:26.418000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbe563210 a2=0 a3=0 items=0 ppid=1839 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi"
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:03:26.443000 audit[1903]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.443000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdf3ac4c00 a2=0 a3=0 items=0 ppid=1839 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.443000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:03:26.473000 audit[1905]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.473000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffccb7d1a70 a2=0 a3=0 items=0 ppid=1839 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:03:26.541000 audit[1907]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.541000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc5bda2ee0 a2=0 a3=0 items=0 ppid=1839 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.541000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:03:26.834000 audit[1910]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.834000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffe272ac90 a2=0 a3=0 items=0 ppid=1839 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 06:03:26.865000 audit[1912]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.865000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd06cf050 a2=0 a3=0 items=0 ppid=1839 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:03:26.872000 audit[1914]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.872000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc8070a6d0 
a2=0 a3=0 items=0 ppid=1839 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:03:26.884000 audit[1916]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.884000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd350d9670 a2=0 a3=0 items=0 ppid=1839 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:03:26.923000 audit[1918]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:26.923000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffed1038530 a2=0 a3=0 items=0 ppid=1839 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:26.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:03:27.071000 audit[1948]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.071000 audit[1948]: SYSCALL arch=c000003e syscall=46 
success=yes exit=116 a0=3 a1=7ffe1cc72920 a2=0 a3=0 items=0 ppid=1839 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 06:03:27.080000 audit[1950]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.080000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff3b987ff0 a2=0 a3=0 items=0 ppid=1839 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 06:03:27.089000 audit[1952]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.089000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd68e2f390 a2=0 a3=0 items=0 ppid=1839 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 06:03:27.094000 audit[1954]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.094000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffd3b5324e0 a2=0 a3=0 items=0 ppid=1839 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 06:03:27.102000 audit[1956]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.102000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2d50c510 a2=0 a3=0 items=0 ppid=1839 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:03:27.107000 audit[1958]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.107000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd275de110 a2=0 a3=0 items=0 ppid=1839 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:03:27.114000 audit[1960]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.114000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=112 a0=3 a1=7ffc5e2a7fb0 a2=0 a3=0 items=0 ppid=1839 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:03:27.130000 audit[1962]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.130000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc07ebb7e0 a2=0 a3=0 items=0 ppid=1839 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:03:27.134000 audit[1964]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.134000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd707a0830 a2=0 a3=0 items=0 ppid=1839 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 06:03:27.138000 audit[1966]: NETFILTER_CFG table=filter:24 family=10 
entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.138000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff5b680ba0 a2=0 a3=0 items=0 ppid=1839 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:03:27.145000 audit[1968]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.145000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffee407ef30 a2=0 a3=0 items=0 ppid=1839 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:03:27.153000 audit[1970]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.153000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc46407870 a2=0 a3=0 items=0 ppid=1839 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 
06:03:27.158000 audit[1972]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.158000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe2c3ea820 a2=0 a3=0 items=0 ppid=1839 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:03:27.170000 audit[1977]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.170000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce99787d0 a2=0 a3=0 items=0 ppid=1839 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:03:27.175000 audit[1979]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.175000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdb27474e0 a2=0 a3=0 items=0 ppid=1839 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 
06:03:27.180000 audit[1981]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.180000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcc8bb5230 a2=0 a3=0 items=0 ppid=1839 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:03:27.184000 audit[1983]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.184000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc75966060 a2=0 a3=0 items=0 ppid=1839 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:03:27.191000 audit[1985]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.191000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc90878de0 a2=0 a3=0 items=0 ppid=1839 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 06:03:27.196000 
audit[1987]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:27.196000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd99df2830 a2=0 a3=0 items=0 ppid=1839 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:03:27.246000 audit[1991]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.246000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffeaf619e10 a2=0 a3=0 items=0 ppid=1839 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 06:03:27.251000 audit[1993]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.251000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff5c4ce600 a2=0 a3=0 items=0 ppid=1839 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.251000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 06:03:27.270000 audit[2001]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.270000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe03154a90 a2=0 a3=0 items=0 ppid=1839 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 06:03:27.291000 audit[2007]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.291000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdbac9fed0 a2=0 a3=0 items=0 ppid=1839 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 06:03:27.296000 audit[2009]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.296000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffddbe9b6e0 a2=0 a3=0 items=0 ppid=1839 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 06:03:27.305000 audit[2011]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.305000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca50d2420 a2=0 a3=0 items=0 ppid=1839 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 06:03:27.310000 audit[2013]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:27.310000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff6cef36b0 a2=0 a3=0 items=0 ppid=1839 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:03:27.319000 audit[2015]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 06:03:27.319000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1beab990 a2=0 a3=0 items=0 ppid=1839 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:27.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 06:03:27.331313 systemd-networkd[1505]: docker0: Link UP Jan 14 06:03:27.341449 dockerd[1839]: time="2026-01-14T06:03:27.341210615Z" level=info msg="Loading containers: done." Jan 14 06:03:27.382847 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4137447140-merged.mount: Deactivated successfully. Jan 14 06:03:27.385821 dockerd[1839]: time="2026-01-14T06:03:27.385730845Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 06:03:27.385982 dockerd[1839]: time="2026-01-14T06:03:27.385941770Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 06:03:27.386692 dockerd[1839]: time="2026-01-14T06:03:27.386128533Z" level=info msg="Initializing buildkit" Jan 14 06:03:27.447794 dockerd[1839]: time="2026-01-14T06:03:27.447633638Z" level=info msg="Completed buildkit initialization" Jan 14 06:03:27.451937 dockerd[1839]: time="2026-01-14T06:03:27.451784084Z" level=info msg="Daemon has completed initialization" Jan 14 06:03:27.452042 dockerd[1839]: time="2026-01-14T06:03:27.451952759Z" level=info msg="API listen on /run/docker.sock" Jan 14 06:03:27.452930 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 14 06:03:27.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:28.432977 containerd[1601]: time="2026-01-14T06:03:28.432320181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 06:03:29.225060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836389378.mount: Deactivated successfully. Jan 14 06:03:30.498505 containerd[1601]: time="2026-01-14T06:03:30.498386553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:30.500190 containerd[1601]: time="2026-01-14T06:03:30.500077082Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 14 06:03:30.501676 containerd[1601]: time="2026-01-14T06:03:30.501638062Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:30.504986 containerd[1601]: time="2026-01-14T06:03:30.504862170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:30.506816 containerd[1601]: time="2026-01-14T06:03:30.506654858Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.074274268s" Jan 14 06:03:30.506816 containerd[1601]: time="2026-01-14T06:03:30.506703074Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 06:03:30.507801 containerd[1601]: time="2026-01-14T06:03:30.507745595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 06:03:31.975036 containerd[1601]: time="2026-01-14T06:03:31.974733171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:31.976233 containerd[1601]: time="2026-01-14T06:03:31.975734667Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 06:03:31.977346 containerd[1601]: time="2026-01-14T06:03:31.977272883Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:31.980519 containerd[1601]: time="2026-01-14T06:03:31.980448428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:31.981649 containerd[1601]: time="2026-01-14T06:03:31.981505336Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.473733511s" Jan 14 06:03:31.981719 containerd[1601]: time="2026-01-14T06:03:31.981673353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference 
\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 06:03:31.991123 containerd[1601]: time="2026-01-14T06:03:31.991000379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 06:03:33.359638 containerd[1601]: time="2026-01-14T06:03:33.359449900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:33.360550 containerd[1601]: time="2026-01-14T06:03:33.360507146Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 14 06:03:33.361710 containerd[1601]: time="2026-01-14T06:03:33.361641324Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:33.364388 containerd[1601]: time="2026-01-14T06:03:33.364307978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:33.365233 containerd[1601]: time="2026-01-14T06:03:33.365163861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.374112108s" Jan 14 06:03:33.365233 containerd[1601]: time="2026-01-14T06:03:33.365200939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 06:03:33.365956 containerd[1601]: time="2026-01-14T06:03:33.365904824Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 06:03:33.402415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 06:03:33.406089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:33.632885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:03:33.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:33.635962 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 06:03:33.636058 kernel: audit: type=1130 audit(1768370613.631:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:33.655071 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:03:33.720232 kubelet[2132]: E0114 06:03:33.720099 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:03:33.725932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:03:33.726179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:03:33.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:03:33.726917 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.7M memory peak. 
Jan 14 06:03:33.737632 kernel: audit: type=1131 audit(1768370613.725:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:03:34.414464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019795421.mount: Deactivated successfully. Jan 14 06:03:34.894792 containerd[1601]: time="2026-01-14T06:03:34.894624869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:34.895969 containerd[1601]: time="2026-01-14T06:03:34.895870394Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 14 06:03:34.897675 containerd[1601]: time="2026-01-14T06:03:34.897520343Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:34.899821 containerd[1601]: time="2026-01-14T06:03:34.899744733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:34.900180 containerd[1601]: time="2026-01-14T06:03:34.900114279Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.534173021s" Jan 14 06:03:34.900180 containerd[1601]: time="2026-01-14T06:03:34.900160342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference 
\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 06:03:34.900737 containerd[1601]: time="2026-01-14T06:03:34.900655461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 06:03:35.324555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7975130.mount: Deactivated successfully. Jan 14 06:03:36.163318 containerd[1601]: time="2026-01-14T06:03:36.163177367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:36.164478 containerd[1601]: time="2026-01-14T06:03:36.164423699Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 14 06:03:36.165658 containerd[1601]: time="2026-01-14T06:03:36.165626135Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:36.168131 containerd[1601]: time="2026-01-14T06:03:36.168081978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:36.169140 containerd[1601]: time="2026-01-14T06:03:36.169080122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.268354333s" Jan 14 06:03:36.169140 containerd[1601]: time="2026-01-14T06:03:36.169118022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 06:03:36.169790 containerd[1601]: time="2026-01-14T06:03:36.169696862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 06:03:36.557173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236700381.mount: Deactivated successfully. Jan 14 06:03:36.565449 containerd[1601]: time="2026-01-14T06:03:36.565364791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:03:36.566417 containerd[1601]: time="2026-01-14T06:03:36.566364175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 14 06:03:36.567518 containerd[1601]: time="2026-01-14T06:03:36.567492138Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:03:36.570230 containerd[1601]: time="2026-01-14T06:03:36.570142477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:03:36.570977 containerd[1601]: time="2026-01-14T06:03:36.570910092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 401.156607ms" Jan 14 06:03:36.570977 containerd[1601]: time="2026-01-14T06:03:36.570956305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 06:03:36.571789 containerd[1601]: time="2026-01-14T06:03:36.571719710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 06:03:36.984281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496717874.mount: Deactivated successfully. Jan 14 06:03:40.197317 containerd[1601]: time="2026-01-14T06:03:40.195395603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:40.197317 containerd[1601]: time="2026-01-14T06:03:40.197238172Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 14 06:03:40.200076 containerd[1601]: time="2026-01-14T06:03:40.199988131Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:40.203396 containerd[1601]: time="2026-01-14T06:03:40.203318055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:03:40.204863 containerd[1601]: time="2026-01-14T06:03:40.204774427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.633002202s" Jan 14 06:03:40.204863 containerd[1601]: time="2026-01-14T06:03:40.204818649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 06:03:43.904026 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 2. Jan 14 06:03:43.946190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:44.258518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 06:03:44.258815 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 06:03:44.259337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:03:44.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:03:44.268560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:44.288681 kernel: audit: type=1130 audit(1768370624.260:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:03:44.354102 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-8.scope)... Jan 14 06:03:44.357064 systemd[1]: Reloading... Jan 14 06:03:44.583629 zram_generator::config[2335]: No configuration found. Jan 14 06:03:45.314310 systemd[1]: Reloading finished in 954 ms. 
Jan 14 06:03:45.398000 audit: BPF prog-id=63 op=LOAD Jan 14 06:03:45.420876 kernel: audit: type=1334 audit(1768370625.398:285): prog-id=63 op=LOAD Jan 14 06:03:45.420993 kernel: audit: type=1334 audit(1768370625.398:286): prog-id=48 op=UNLOAD Jan 14 06:03:45.398000 audit: BPF prog-id=48 op=UNLOAD Jan 14 06:03:45.403000 audit: BPF prog-id=64 op=LOAD Jan 14 06:03:45.403000 audit: BPF prog-id=59 op=UNLOAD Jan 14 06:03:45.405000 audit: BPF prog-id=65 op=LOAD Jan 14 06:03:45.405000 audit: BPF prog-id=43 op=UNLOAD Jan 14 06:03:45.406000 audit: BPF prog-id=66 op=LOAD Jan 14 06:03:45.406000 audit: BPF prog-id=67 op=LOAD Jan 14 06:03:45.406000 audit: BPF prog-id=44 op=UNLOAD Jan 14 06:03:45.406000 audit: BPF prog-id=45 op=UNLOAD Jan 14 06:03:45.409000 audit: BPF prog-id=68 op=LOAD Jan 14 06:03:45.422400 kernel: audit: type=1334 audit(1768370625.403:287): prog-id=64 op=LOAD Jan 14 06:03:45.422443 kernel: audit: type=1334 audit(1768370625.403:288): prog-id=59 op=UNLOAD Jan 14 06:03:45.422472 kernel: audit: type=1334 audit(1768370625.405:289): prog-id=65 op=LOAD Jan 14 06:03:45.422503 kernel: audit: type=1334 audit(1768370625.405:290): prog-id=43 op=UNLOAD Jan 14 06:03:45.422531 kernel: audit: type=1334 audit(1768370625.406:291): prog-id=66 op=LOAD Jan 14 06:03:45.422655 kernel: audit: type=1334 audit(1768370625.406:292): prog-id=67 op=LOAD Jan 14 06:03:45.422701 kernel: audit: type=1334 audit(1768370625.406:293): prog-id=44 op=UNLOAD Jan 14 06:03:45.409000 audit: BPF prog-id=52 op=UNLOAD Jan 14 06:03:45.409000 audit: BPF prog-id=69 op=LOAD Jan 14 06:03:45.409000 audit: BPF prog-id=70 op=LOAD Jan 14 06:03:45.410000 audit: BPF prog-id=53 op=UNLOAD Jan 14 06:03:45.410000 audit: BPF prog-id=54 op=UNLOAD Jan 14 06:03:45.415000 audit: BPF prog-id=71 op=LOAD Jan 14 06:03:45.415000 audit: BPF prog-id=55 op=UNLOAD Jan 14 06:03:45.416000 audit: BPF prog-id=72 op=LOAD Jan 14 06:03:45.416000 audit: BPF prog-id=73 op=LOAD Jan 14 06:03:45.416000 audit: BPF prog-id=56 op=UNLOAD Jan 14 
06:03:45.416000 audit: BPF prog-id=57 op=UNLOAD Jan 14 06:03:45.422000 audit: BPF prog-id=74 op=LOAD Jan 14 06:03:45.422000 audit: BPF prog-id=60 op=UNLOAD Jan 14 06:03:45.422000 audit: BPF prog-id=75 op=LOAD Jan 14 06:03:45.422000 audit: BPF prog-id=76 op=LOAD Jan 14 06:03:45.422000 audit: BPF prog-id=61 op=UNLOAD Jan 14 06:03:45.422000 audit: BPF prog-id=62 op=UNLOAD Jan 14 06:03:45.423000 audit: BPF prog-id=77 op=LOAD Jan 14 06:03:45.423000 audit: BPF prog-id=78 op=LOAD Jan 14 06:03:45.423000 audit: BPF prog-id=46 op=UNLOAD Jan 14 06:03:45.423000 audit: BPF prog-id=47 op=UNLOAD Jan 14 06:03:45.424000 audit: BPF prog-id=79 op=LOAD Jan 14 06:03:45.424000 audit: BPF prog-id=49 op=UNLOAD Jan 14 06:03:45.424000 audit: BPF prog-id=80 op=LOAD Jan 14 06:03:45.424000 audit: BPF prog-id=81 op=LOAD Jan 14 06:03:45.424000 audit: BPF prog-id=50 op=UNLOAD Jan 14 06:03:45.424000 audit: BPF prog-id=51 op=UNLOAD Jan 14 06:03:45.428000 audit: BPF prog-id=82 op=LOAD Jan 14 06:03:45.428000 audit: BPF prog-id=58 op=UNLOAD Jan 14 06:03:45.465984 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 06:03:45.466217 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 06:03:45.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:03:45.467973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:03:45.469144 systemd[1]: kubelet.service: Consumed 216ms CPU time, 98.5M memory peak. Jan 14 06:03:45.472534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:45.983206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 06:03:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:46.016428 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:03:46.152511 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:03:46.152511 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:03:46.152511 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 06:03:46.153105 kubelet[2382]: I0114 06:03:46.152716 2382 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:03:46.745881 kubelet[2382]: I0114 06:03:46.744395 2382 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 06:03:46.745881 kubelet[2382]: I0114 06:03:46.744450 2382 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:03:46.745881 kubelet[2382]: I0114 06:03:46.745034 2382 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 06:03:46.869659 kubelet[2382]: E0114 06:03:46.869246 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:46.871532 kubelet[2382]: I0114 06:03:46.871368 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:03:46.897744 kubelet[2382]: I0114 06:03:46.897712 2382 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:03:46.916629 kubelet[2382]: I0114 06:03:46.916456 2382 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 06:03:46.922699 kubelet[2382]: I0114 06:03:46.922429 2382 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:03:46.923928 kubelet[2382]: I0114 06:03:46.922509 2382 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:03:46.924514 kubelet[2382]: I0114 06:03:46.923943 2382 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 14 06:03:46.924514 kubelet[2382]: I0114 06:03:46.923965 2382 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 06:03:46.924514 kubelet[2382]: I0114 06:03:46.924243 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:03:46.933396 kubelet[2382]: I0114 06:03:46.933277 2382 kubelet.go:446] "Attempting to sync node with API server" Jan 14 06:03:46.935284 kubelet[2382]: I0114 06:03:46.935135 2382 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:03:46.935284 kubelet[2382]: I0114 06:03:46.935220 2382 kubelet.go:352] "Adding apiserver pod source" Jan 14 06:03:46.935284 kubelet[2382]: I0114 06:03:46.935239 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:03:46.942031 kubelet[2382]: I0114 06:03:46.941928 2382 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:03:46.942757 kubelet[2382]: I0114 06:03:46.942553 2382 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 06:03:46.944735 kubelet[2382]: W0114 06:03:46.944372 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 06:03:46.949216 kubelet[2382]: W0114 06:03:46.947172 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:46.949216 kubelet[2382]: I0114 06:03:46.947225 2382 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:03:46.949216 kubelet[2382]: E0114 06:03:46.947242 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:46.949216 kubelet[2382]: I0114 06:03:46.947263 2382 server.go:1287] "Started kubelet" Jan 14 06:03:46.949216 kubelet[2382]: I0114 06:03:46.947749 2382 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:03:46.950162 kubelet[2382]: W0114 06:03:46.950010 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:46.950211 kubelet[2382]: E0114 06:03:46.950165 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:46.953283 kubelet[2382]: I0114 06:03:46.952270 2382 server.go:479] "Adding debug handlers to kubelet server" Jan 14 06:03:46.971821 kubelet[2382]: I0114 06:03:46.971517 2382 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Jan 14 06:03:46.972401 kubelet[2382]: I0114 06:03:46.971924 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 06:03:46.976381 kubelet[2382]: I0114 06:03:46.973258 2382 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:03:46.983455 kubelet[2382]: I0114 06:03:46.983411 2382 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 06:03:46.984552 kubelet[2382]: I0114 06:03:46.983808 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:03:46.987421 kubelet[2382]: E0114 06:03:46.977034 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a83b74a6cd015 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 06:03:46.947239957 +0000 UTC m=+0.923853621,LastTimestamp:2026-01-14 06:03:46.947239957 +0000 UTC m=+0.923853621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 06:03:46.992809 kubelet[2382]: I0114 06:03:46.992522 2382 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:03:47.000454 kubelet[2382]: I0114 06:03:47.000075 2382 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:03:47.004475 kubelet[2382]: W0114 06:03:47.003814 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:47.004475 kubelet[2382]: E0114 06:03:47.003899 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:47.004475 kubelet[2382]: E0114 06:03:47.004350 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Jan 14 06:03:47.004475 kubelet[2382]: E0114 06:03:47.004461 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 06:03:47.007920 kubelet[2382]: E0114 06:03:47.007894 2382 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:03:47.008517 kubelet[2382]: I0114 06:03:47.008424 2382 factory.go:221] Registration of the containerd container factory successfully Jan 14 06:03:47.008517 kubelet[2382]: I0114 06:03:47.008492 2382 factory.go:221] Registration of the systemd container factory successfully Jan 14 06:03:47.008852 kubelet[2382]: I0114 06:03:47.008782 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:03:47.008000 audit[2396]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.008000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde9939120 a2=0 a3=0 items=0 ppid=2382 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:03:47.012000 audit[2397]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.012000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcedcb6270 a2=0 a3=0 items=0 ppid=2382 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:03:47.018000 audit[2399]: 
NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.018000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5e5da260 a2=0 a3=0 items=0 ppid=2382 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:03:47.036000 audit[2403]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.036000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc66374840 a2=0 a3=0 items=0 ppid=2382 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:03:47.063379 kubelet[2382]: I0114 06:03:47.062306 2382 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:03:47.063379 kubelet[2382]: I0114 06:03:47.062328 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:03:47.063379 kubelet[2382]: I0114 06:03:47.062477 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:03:47.067000 audit[2408]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.067000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe40bd5ca0 a2=0 a3=0 items=0 ppid=2382 pid=2408 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 06:03:47.069651 kubelet[2382]: I0114 06:03:47.068836 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 06:03:47.071000 audit[2409]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:47.071000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe1241ccb0 a2=0 a3=0 items=0 ppid=2382 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:03:47.072000 audit[2410]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.072000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc94503b80 a2=0 a3=0 items=0 ppid=2382 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 
06:03:47.076017 kubelet[2382]: I0114 06:03:47.072533 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 06:03:47.076017 kubelet[2382]: I0114 06:03:47.072554 2382 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 06:03:47.076017 kubelet[2382]: I0114 06:03:47.072639 2382 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 06:03:47.076017 kubelet[2382]: I0114 06:03:47.072651 2382 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 06:03:47.076017 kubelet[2382]: E0114 06:03:47.072889 2382 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:03:47.076017 kubelet[2382]: W0114 06:03:47.073536 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:47.076017 kubelet[2382]: E0114 06:03:47.073719 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:47.077000 audit[2413]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.077000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbd582c80 a2=0 a3=0 items=0 ppid=2382 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:03:47.079000 audit[2412]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:47.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:03:47.079000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe001ceb90 a2=0 a3=0 items=0 ppid=2382 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 06:03:47.084000 audit[2414]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:03:47.084000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff28936d90 a2=0 a3=0 items=0 ppid=2382 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:03:47.085000 audit[2415]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:47.085000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8bb6b9d0 a2=0 a3=0 items=0 ppid=2382 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:03:47.089000 audit[2416]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:03:47.089000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff07fd26a0 a2=0 a3=0 items=0 ppid=2382 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:47.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:03:47.105172 kubelet[2382]: E0114 06:03:47.105120 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 06:03:47.174070 kubelet[2382]: E0114 06:03:47.173900 2382 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 06:03:47.205851 kubelet[2382]: E0114 06:03:47.205680 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 06:03:47.208078 kubelet[2382]: E0114 06:03:47.207308 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Jan 14 06:03:47.248194 kubelet[2382]: I0114 06:03:47.248078 2382 policy_none.go:49] "None policy: Start" Jan 14 06:03:47.249156 kubelet[2382]: I0114 06:03:47.248339 2382 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:03:47.249156 kubelet[2382]: 
I0114 06:03:47.248385 2382 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:03:47.281154 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 06:03:47.298379 kubelet[2382]: E0114 06:03:47.298137 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a83b74a6cd015 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 06:03:46.947239957 +0000 UTC m=+0.923853621,LastTimestamp:2026-01-14 06:03:46.947239957 +0000 UTC m=+0.923853621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 06:03:47.305944 kubelet[2382]: E0114 06:03:47.305874 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 06:03:47.316345 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 06:03:47.324449 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 06:03:47.345902 kubelet[2382]: I0114 06:03:47.345281 2382 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 06:03:47.345902 kubelet[2382]: I0114 06:03:47.345731 2382 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:03:47.345902 kubelet[2382]: I0114 06:03:47.345745 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:03:47.346420 kubelet[2382]: I0114 06:03:47.346382 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:03:47.349992 kubelet[2382]: E0114 06:03:47.349937 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 06:03:47.350194 kubelet[2382]: E0114 06:03:47.350070 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 06:03:47.394749 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 14 06:03:47.404218 kubelet[2382]: I0114 06:03:47.403430 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:47.404218 kubelet[2382]: I0114 06:03:47.403509 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:47.404218 kubelet[2382]: I0114 06:03:47.403536 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:47.404218 kubelet[2382]: I0114 06:03:47.403557 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:47.404218 kubelet[2382]: I0114 06:03:47.403951 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:47.404413 kubelet[2382]: I0114 
06:03:47.403981 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:47.404413 kubelet[2382]: I0114 06:03:47.404022 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:47.404413 kubelet[2382]: I0114 06:03:47.404045 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:47.404413 kubelet[2382]: I0114 06:03:47.404068 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:47.421900 kubelet[2382]: E0114 06:03:47.420466 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:47.428946 systemd[1]: Created slice kubepods-burstable-pod1511c0ad88f6e9128d47cdad2da07dad.slice - libcontainer container 
kubepods-burstable-pod1511c0ad88f6e9128d47cdad2da07dad.slice. Jan 14 06:03:47.445367 kubelet[2382]: E0114 06:03:47.445187 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:47.452712 kubelet[2382]: I0114 06:03:47.451461 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 06:03:47.452712 kubelet[2382]: E0114 06:03:47.452181 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 14 06:03:47.452548 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 14 06:03:47.458274 kubelet[2382]: E0114 06:03:47.458054 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:47.609693 kubelet[2382]: E0114 06:03:47.609330 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Jan 14 06:03:47.656824 kubelet[2382]: I0114 06:03:47.656779 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 06:03:47.657678 kubelet[2382]: E0114 06:03:47.657133 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 14 06:03:47.722660 kubelet[2382]: E0114 06:03:47.721475 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:47.725280 containerd[1601]: time="2026-01-14T06:03:47.723696977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 14 06:03:47.750380 kubelet[2382]: E0114 06:03:47.748834 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:47.750535 containerd[1601]: time="2026-01-14T06:03:47.749269028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1511c0ad88f6e9128d47cdad2da07dad,Namespace:kube-system,Attempt:0,}" Jan 14 06:03:47.760132 kubelet[2382]: E0114 06:03:47.759881 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:47.760500 containerd[1601]: time="2026-01-14T06:03:47.760420149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 14 06:03:47.812214 kubelet[2382]: W0114 06:03:47.810523 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:47.812214 kubelet[2382]: E0114 06:03:47.810691 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:47.822396 containerd[1601]: 
time="2026-01-14T06:03:47.822082727Z" level=info msg="connecting to shim aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab" address="unix:///run/containerd/s/86ceb7550820b0b97a34bf67d03e494c1881bd894b502ac5f98facfadab433e5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:03:47.884716 kubelet[2382]: W0114 06:03:47.884277 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:47.884716 kubelet[2382]: E0114 06:03:47.884419 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:47.889122 containerd[1601]: time="2026-01-14T06:03:47.889001406Z" level=info msg="connecting to shim 86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c" address="unix:///run/containerd/s/248c552aa3b14e6d6d7513f1f4bd6dcccf7d176e2a481464f1574893e288ca65" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:03:47.894826 containerd[1601]: time="2026-01-14T06:03:47.894738765Z" level=info msg="connecting to shim a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16" address="unix:///run/containerd/s/4848ad088be790cb1271aa54df84d81182ded5bb944c8ad5d662de048ce52ab8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:03:47.960147 systemd[1]: Started cri-containerd-aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab.scope - libcontainer container aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab. 
Jan 14 06:03:47.974651 systemd[1]: Started cri-containerd-86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c.scope - libcontainer container 86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c. Jan 14 06:03:47.995804 systemd[1]: Started cri-containerd-a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16.scope - libcontainer container a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16. Jan 14 06:03:48.006000 audit: BPF prog-id=83 op=LOAD Jan 14 06:03:48.007000 audit: BPF prog-id=84 op=LOAD Jan 14 06:03:48.007000 audit[2437]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.008000 audit: BPF prog-id=84 op=UNLOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.008000 audit: BPF prog-id=85 op=LOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c000130488 a2=98 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.008000 audit: BPF prog-id=86 op=LOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.008000 audit: BPF prog-id=86 op=UNLOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.008000 audit: BPF prog-id=85 op=UNLOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.009000 audit: BPF prog-id=87 op=LOAD Jan 14 06:03:48.008000 audit: BPF prog-id=88 op=LOAD Jan 14 06:03:48.008000 audit[2437]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166663839386332323264313962383239336137303436346366646436 Jan 14 06:03:48.011000 audit: BPF prog-id=89 op=LOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 
06:03:48.011000 audit: BPF prog-id=89 op=UNLOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.011000 audit: BPF prog-id=90 op=LOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.011000 audit: BPF prog-id=91 op=LOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.011000 audit: BPF prog-id=91 op=UNLOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.011000 audit: BPF prog-id=90 op=UNLOAD Jan 14 06:03:48.011000 audit[2490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.014000 audit: BPF prog-id=92 op=LOAD Jan 14 06:03:48.014000 audit[2490]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2460 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:03:48.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836656338643462363337303262623038353238316562393139613836 Jan 14 06:03:48.029066 kubelet[2382]: W0114 06:03:48.028949 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:48.029151 kubelet[2382]: E0114 06:03:48.029072 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:48.032000 audit: BPF prog-id=93 op=LOAD Jan 14 06:03:48.033000 audit: BPF prog-id=94 op=LOAD Jan 14 06:03:48.033000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.033000 audit: BPF prog-id=94 op=UNLOAD Jan 14 06:03:48.033000 audit[2492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.034000 audit: BPF prog-id=95 op=LOAD Jan 14 06:03:48.034000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.034000 audit: BPF prog-id=96 op=LOAD Jan 14 06:03:48.034000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.035000 audit: BPF prog-id=96 op=UNLOAD Jan 14 06:03:48.035000 audit[2492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.035000 audit: BPF prog-id=95 op=UNLOAD Jan 14 06:03:48.035000 audit[2492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.035000 audit: BPF prog-id=97 op=LOAD Jan 14 06:03:48.035000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2464 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134303234646133623630353064363139303961313031613233373239 Jan 14 06:03:48.065073 kubelet[2382]: I0114 06:03:48.064986 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 06:03:48.065487 kubelet[2382]: E0114 
06:03:48.065395 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 14 06:03:48.102393 containerd[1601]: time="2026-01-14T06:03:48.102290734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab\"" Jan 14 06:03:48.104434 kubelet[2382]: E0114 06:03:48.104312 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:48.111789 containerd[1601]: time="2026-01-14T06:03:48.111441729Z" level=info msg="CreateContainer within sandbox \"aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 06:03:48.125322 containerd[1601]: time="2026-01-14T06:03:48.125137234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1511c0ad88f6e9128d47cdad2da07dad,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c\"" Jan 14 06:03:48.127623 kubelet[2382]: E0114 06:03:48.127400 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:48.133952 containerd[1601]: time="2026-01-14T06:03:48.133869430Z" level=info msg="CreateContainer within sandbox \"86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 06:03:48.140466 containerd[1601]: time="2026-01-14T06:03:48.140128820Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16\"" Jan 14 06:03:48.144219 kubelet[2382]: E0114 06:03:48.144092 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:48.150871 containerd[1601]: time="2026-01-14T06:03:48.148855603Z" level=info msg="CreateContainer within sandbox \"a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 06:03:48.156079 containerd[1601]: time="2026-01-14T06:03:48.155994170Z" level=info msg="Container c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:03:48.184929 containerd[1601]: time="2026-01-14T06:03:48.184486776Z" level=info msg="Container c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:03:48.195955 containerd[1601]: time="2026-01-14T06:03:48.195824532Z" level=info msg="CreateContainer within sandbox \"aff898c222d19b8293a70464cfdd6f582cefa28df06bb0008e9119f454b64fab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02\"" Jan 14 06:03:48.197497 containerd[1601]: time="2026-01-14T06:03:48.197378364Z" level=info msg="StartContainer for \"c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02\"" Jan 14 06:03:48.198127 containerd[1601]: time="2026-01-14T06:03:48.197972508Z" level=info msg="Container b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:03:48.199886 containerd[1601]: time="2026-01-14T06:03:48.198963427Z" level=info msg="connecting to 
shim c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02" address="unix:///run/containerd/s/86ceb7550820b0b97a34bf67d03e494c1881bd894b502ac5f98facfadab433e5" protocol=ttrpc version=3 Jan 14 06:03:48.208656 containerd[1601]: time="2026-01-14T06:03:48.208458107Z" level=info msg="CreateContainer within sandbox \"86ec8d4b63702bb085281eb919a865db0bffa5208c117dcebe13b0254ea3961c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2\"" Jan 14 06:03:48.209686 containerd[1601]: time="2026-01-14T06:03:48.209556372Z" level=info msg="StartContainer for \"c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2\"" Jan 14 06:03:48.211535 containerd[1601]: time="2026-01-14T06:03:48.211461767Z" level=info msg="connecting to shim c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2" address="unix:///run/containerd/s/248c552aa3b14e6d6d7513f1f4bd6dcccf7d176e2a481464f1574893e288ca65" protocol=ttrpc version=3 Jan 14 06:03:48.225019 containerd[1601]: time="2026-01-14T06:03:48.224911736Z" level=info msg="CreateContainer within sandbox \"a4024da3b6050d61909a101a237299b316cb36c4c1a4b2066b104cb819994d16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659\"" Jan 14 06:03:48.227984 containerd[1601]: time="2026-01-14T06:03:48.227955489Z" level=info msg="StartContainer for \"b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659\"" Jan 14 06:03:48.232553 containerd[1601]: time="2026-01-14T06:03:48.232431530Z" level=info msg="connecting to shim b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659" address="unix:///run/containerd/s/4848ad088be790cb1271aa54df84d81182ded5bb944c8ad5d662de048ce52ab8" protocol=ttrpc version=3 Jan 14 06:03:48.260926 systemd[1]: Started cri-containerd-c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2.scope - libcontainer 
container c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2. Jan 14 06:03:48.267135 systemd[1]: Started cri-containerd-c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02.scope - libcontainer container c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02. Jan 14 06:03:48.289217 systemd[1]: Started cri-containerd-b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659.scope - libcontainer container b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659. Jan 14 06:03:48.307000 audit: BPF prog-id=98 op=LOAD Jan 14 06:03:48.309000 audit: BPF prog-id=99 op=LOAD Jan 14 06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.309000 audit: BPF prog-id=99 op=UNLOAD Jan 14 06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.309000 audit: BPF prog-id=100 op=LOAD Jan 14 
06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.309000 audit: BPF prog-id=101 op=LOAD Jan 14 06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.309000 audit: BPF prog-id=101 op=UNLOAD Jan 14 06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 
06:03:48.309000 audit: BPF prog-id=100 op=UNLOAD Jan 14 06:03:48.309000 audit[2568]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.320420 kubelet[2382]: W0114 06:03:48.320142 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 14 06:03:48.320420 kubelet[2382]: E0114 06:03:48.320386 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:03:48.317000 audit: BPF prog-id=102 op=LOAD Jan 14 06:03:48.317000 audit[2568]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2460 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.317000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613939326437383763333133353163633538363062653937633933 Jan 14 06:03:48.323000 audit: BPF prog-id=103 op=LOAD Jan 14 06:03:48.329000 audit: BPF prog-id=104 op=LOAD Jan 14 06:03:48.329000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.329000 audit: BPF prog-id=104 op=UNLOAD Jan 14 06:03:48.329000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.330000 audit: BPF prog-id=105 op=LOAD Jan 14 06:03:48.330000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.330000 audit: BPF prog-id=106 op=LOAD Jan 14 06:03:48.330000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.330000 audit: BPF prog-id=106 op=UNLOAD Jan 14 06:03:48.330000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.330000 audit: BPF prog-id=105 op=UNLOAD Jan 14 06:03:48.330000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.331000 audit: BPF prog-id=107 op=LOAD Jan 14 06:03:48.330000 audit: BPF prog-id=108 op=LOAD Jan 14 06:03:48.330000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2424 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396534663165313530613166383566353130343661353237363439 Jan 14 06:03:48.333000 audit: BPF prog-id=109 op=LOAD Jan 14 06:03:48.333000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.333000 audit: BPF prog-id=109 op=UNLOAD Jan 14 06:03:48.333000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.333000 audit: BPF prog-id=110 op=LOAD Jan 14 06:03:48.333000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.334000 audit: BPF prog-id=111 op=LOAD Jan 14 06:03:48.334000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.334000 audit: BPF prog-id=111 op=UNLOAD Jan 14 06:03:48.334000 audit[2585]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.334000 audit: BPF prog-id=110 op=UNLOAD Jan 14 06:03:48.334000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.335000 audit: BPF prog-id=112 op=LOAD Jan 14 06:03:48.335000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2464 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:03:48.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234646430396433663630376364656266333866313034356234326435 Jan 14 06:03:48.411420 kubelet[2382]: E0114 06:03:48.411387 2382 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Jan 14 06:03:48.452639 containerd[1601]: time="2026-01-14T06:03:48.450913820Z" level=info msg="StartContainer for \"b4dd09d3f607cdebf38f1045b42d540ff6ceb67ed9aa11d850c46ebe7bd8f659\" returns successfully" Jan 14 06:03:48.470225 containerd[1601]: time="2026-01-14T06:03:48.470029564Z" level=info msg="StartContainer for \"c0a992d787c31351cc5860be97c930e9cb48df3c5a2fde4f5f9c3c6d3758abe2\" returns successfully" Jan 14 06:03:48.487131 containerd[1601]: time="2026-01-14T06:03:48.486978844Z" level=info msg="StartContainer for \"c99e4f1e150a1f85f51046a52764980956c00121f2efbe9b0557a3dbcfed2c02\" returns successfully" Jan 14 06:03:48.867386 kubelet[2382]: I0114 06:03:48.867214 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 06:03:49.095631 kubelet[2382]: E0114 06:03:49.095361 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:49.098281 kubelet[2382]: E0114 06:03:49.098029 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:49.107958 kubelet[2382]: E0114 06:03:49.105747 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:49.108264 kubelet[2382]: E0114 06:03:49.108242 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:49.111128 kubelet[2382]: E0114 06:03:49.111106 2382 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:49.112443 kubelet[2382]: E0114 06:03:49.112306 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:50.116353 kubelet[2382]: E0114 06:03:50.116087 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:50.118996 kubelet[2382]: E0114 06:03:50.117499 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:50.118996 kubelet[2382]: E0114 06:03:50.118758 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:50.118996 kubelet[2382]: E0114 06:03:50.118944 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:50.818223 kubelet[2382]: E0114 06:03:50.813418 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:50.818223 kubelet[2382]: E0114 06:03:50.818092 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:51.705735 kubelet[2382]: E0114 06:03:51.705533 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 06:03:51.706348 kubelet[2382]: E0114 06:03:51.705824 
2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:51.791058 kubelet[2382]: E0114 06:03:51.790983 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 06:03:51.915020 kubelet[2382]: I0114 06:03:51.914765 2382 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 06:03:51.944958 kubelet[2382]: I0114 06:03:51.944862 2382 apiserver.go:52] "Watching apiserver" Jan 14 06:03:51.993102 kubelet[2382]: I0114 06:03:51.992908 2382 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 06:03:52.005351 kubelet[2382]: I0114 06:03:52.005217 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:52.035747 kubelet[2382]: E0114 06:03:52.035128 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:52.035747 kubelet[2382]: I0114 06:03:52.035170 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:52.038262 kubelet[2382]: E0114 06:03:52.037995 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:52.038262 kubelet[2382]: I0114 06:03:52.038068 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:52.040503 kubelet[2382]: E0114 06:03:52.040402 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:55.776322 kubelet[2382]: I0114 06:03:55.776164 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:55.801817 kubelet[2382]: E0114 06:03:55.801411 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:56.140875 kubelet[2382]: E0114 06:03:56.140079 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:56.516750 systemd[1]: Reload requested from client PID 2665 ('systemctl') (unit session-8.scope)... Jan 14 06:03:56.517263 systemd[1]: Reloading... Jan 14 06:03:56.668723 zram_generator::config[2714]: No configuration found. Jan 14 06:03:57.001859 systemd[1]: Reloading finished in 483 ms. Jan 14 06:03:57.036138 kubelet[2382]: I0114 06:03:57.035936 2382 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:03:57.036153 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:57.060256 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 06:03:57.060842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:03:57.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:57.060971 systemd[1]: kubelet.service: Consumed 1.841s CPU time, 133.3M memory peak. 
Jan 14 06:03:57.064426 kernel: kauditd_printk_skb: 201 callbacks suppressed Jan 14 06:03:57.064512 kernel: audit: type=1131 audit(1768370637.060:387): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:03:57.065498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:03:57.082673 kernel: audit: type=1334 audit(1768370637.069:388): prog-id=113 op=LOAD Jan 14 06:03:57.082765 kernel: audit: type=1334 audit(1768370637.069:389): prog-id=64 op=UNLOAD Jan 14 06:03:57.069000 audit: BPF prog-id=113 op=LOAD Jan 14 06:03:57.069000 audit: BPF prog-id=64 op=UNLOAD Jan 14 06:03:57.073000 audit: BPF prog-id=114 op=LOAD Jan 14 06:03:57.088011 kernel: audit: type=1334 audit(1768370637.073:390): prog-id=114 op=LOAD Jan 14 06:03:57.088073 kernel: audit: type=1334 audit(1768370637.073:391): prog-id=68 op=UNLOAD Jan 14 06:03:57.073000 audit: BPF prog-id=68 op=UNLOAD Jan 14 06:03:57.074000 audit: BPF prog-id=115 op=LOAD Jan 14 06:03:57.094500 kernel: audit: type=1334 audit(1768370637.074:392): prog-id=115 op=LOAD Jan 14 06:03:57.094665 kernel: audit: type=1334 audit(1768370637.074:393): prog-id=116 op=LOAD Jan 14 06:03:57.074000 audit: BPF prog-id=116 op=LOAD Jan 14 06:03:57.074000 audit: BPF prog-id=69 op=UNLOAD Jan 14 06:03:57.101995 kernel: audit: type=1334 audit(1768370637.074:394): prog-id=69 op=UNLOAD Jan 14 06:03:57.102070 kernel: audit: type=1334 audit(1768370637.074:395): prog-id=70 op=UNLOAD Jan 14 06:03:57.074000 audit: BPF prog-id=70 op=UNLOAD Jan 14 06:03:57.077000 audit: BPF prog-id=117 op=LOAD Jan 14 06:03:57.111042 kernel: audit: type=1334 audit(1768370637.077:396): prog-id=117 op=LOAD Jan 14 06:03:57.077000 audit: BPF prog-id=65 op=UNLOAD Jan 14 06:03:57.077000 audit: BPF prog-id=118 op=LOAD Jan 14 06:03:57.077000 audit: BPF prog-id=119 op=LOAD Jan 14 06:03:57.077000 audit: BPF prog-id=66 
op=UNLOAD Jan 14 06:03:57.077000 audit: BPF prog-id=67 op=UNLOAD Jan 14 06:03:57.078000 audit: BPF prog-id=120 op=LOAD Jan 14 06:03:57.078000 audit: BPF prog-id=79 op=UNLOAD Jan 14 06:03:57.079000 audit: BPF prog-id=121 op=LOAD Jan 14 06:03:57.079000 audit: BPF prog-id=122 op=LOAD Jan 14 06:03:57.079000 audit: BPF prog-id=80 op=UNLOAD Jan 14 06:03:57.079000 audit: BPF prog-id=81 op=UNLOAD Jan 14 06:03:57.080000 audit: BPF prog-id=123 op=LOAD Jan 14 06:03:57.080000 audit: BPF prog-id=71 op=UNLOAD Jan 14 06:03:57.080000 audit: BPF prog-id=124 op=LOAD Jan 14 06:03:57.080000 audit: BPF prog-id=125 op=LOAD Jan 14 06:03:57.080000 audit: BPF prog-id=72 op=UNLOAD Jan 14 06:03:57.080000 audit: BPF prog-id=73 op=UNLOAD Jan 14 06:03:57.081000 audit: BPF prog-id=126 op=LOAD Jan 14 06:03:57.098000 audit: BPF prog-id=82 op=UNLOAD Jan 14 06:03:57.100000 audit: BPF prog-id=127 op=LOAD Jan 14 06:03:57.100000 audit: BPF prog-id=63 op=UNLOAD Jan 14 06:03:57.104000 audit: BPF prog-id=128 op=LOAD Jan 14 06:03:57.104000 audit: BPF prog-id=74 op=UNLOAD Jan 14 06:03:57.104000 audit: BPF prog-id=129 op=LOAD Jan 14 06:03:57.104000 audit: BPF prog-id=130 op=LOAD Jan 14 06:03:57.104000 audit: BPF prog-id=75 op=UNLOAD Jan 14 06:03:57.104000 audit: BPF prog-id=76 op=UNLOAD Jan 14 06:03:57.105000 audit: BPF prog-id=131 op=LOAD Jan 14 06:03:57.105000 audit: BPF prog-id=132 op=LOAD Jan 14 06:03:57.106000 audit: BPF prog-id=77 op=UNLOAD Jan 14 06:03:57.106000 audit: BPF prog-id=78 op=UNLOAD Jan 14 06:03:57.355420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:03:57.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:03:57.371143 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:03:57.466845 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:03:57.466845 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:03:57.466845 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:03:57.466845 kubelet[2756]: I0114 06:03:57.466843 2756 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:03:57.476303 kubelet[2756]: I0114 06:03:57.476199 2756 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 06:03:57.476303 kubelet[2756]: I0114 06:03:57.476266 2756 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:03:57.476559 kubelet[2756]: I0114 06:03:57.476527 2756 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 06:03:57.478307 kubelet[2756]: I0114 06:03:57.478088 2756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 14 06:03:57.483096 kubelet[2756]: I0114 06:03:57.482876 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:03:57.491459 kubelet[2756]: I0114 06:03:57.491405 2756 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:03:57.499011 kubelet[2756]: I0114 06:03:57.498916 2756 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 06:03:57.499420 kubelet[2756]: I0114 06:03:57.499327 2756 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:03:57.499631 kubelet[2756]: I0114 06:03:57.499392 2756 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:03:57.499760 kubelet[2756]: I0114 06:03:57.499653 2756 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 06:03:57.499760 kubelet[2756]: I0114 06:03:57.499666 2756 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 06:03:57.499760 kubelet[2756]: I0114 06:03:57.499754 2756 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:03:57.500038 kubelet[2756]: I0114 06:03:57.499979 2756 kubelet.go:446] "Attempting to sync node with API server" Jan 14 06:03:57.500038 kubelet[2756]: I0114 06:03:57.500027 2756 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:03:57.500086 kubelet[2756]: I0114 06:03:57.500048 2756 kubelet.go:352] "Adding apiserver pod source" Jan 14 06:03:57.500086 kubelet[2756]: I0114 06:03:57.500058 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:03:57.502626 kubelet[2756]: I0114 06:03:57.501016 2756 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:03:57.502626 kubelet[2756]: I0114 06:03:57.501420 2756 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 06:03:57.502626 kubelet[2756]: I0114 06:03:57.502286 2756 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:03:57.502626 kubelet[2756]: I0114 06:03:57.502317 2756 server.go:1287] "Started kubelet" Jan 14 06:03:57.502626 kubelet[2756]: I0114 06:03:57.502385 2756 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:03:57.504076 kubelet[2756]: I0114 06:03:57.503885 
2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 06:03:57.504334 kubelet[2756]: I0114 06:03:57.504250 2756 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:03:57.507980 kubelet[2756]: I0114 06:03:57.507960 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 06:03:57.508363 kubelet[2756]: I0114 06:03:57.508346 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:03:57.513528 kubelet[2756]: I0114 06:03:57.513500 2756 server.go:479] "Adding debug handlers to kubelet server" Jan 14 06:03:57.517560 kubelet[2756]: I0114 06:03:57.513852 2756 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 06:03:57.522809 kubelet[2756]: I0114 06:03:57.513918 2756 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:03:57.522809 kubelet[2756]: E0114 06:03:57.514218 2756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 06:03:57.522809 kubelet[2756]: E0114 06:03:57.520954 2756 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:03:57.522915 kubelet[2756]: I0114 06:03:57.521268 2756 factory.go:221] Registration of the systemd container factory successfully Jan 14 06:03:57.523107 kubelet[2756]: I0114 06:03:57.523011 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:03:57.523651 kubelet[2756]: I0114 06:03:57.523481 2756 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:03:57.530098 kubelet[2756]: I0114 06:03:57.529983 2756 factory.go:221] Registration of the containerd container factory successfully Jan 14 06:03:57.550890 kubelet[2756]: I0114 06:03:57.550787 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 06:03:57.559230 kubelet[2756]: I0114 06:03:57.557173 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 06:03:57.559230 kubelet[2756]: I0114 06:03:57.557211 2756 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 06:03:57.559230 kubelet[2756]: I0114 06:03:57.557236 2756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 06:03:57.559230 kubelet[2756]: I0114 06:03:57.557246 2756 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 06:03:57.559230 kubelet[2756]: E0114 06:03:57.557320 2756 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622040 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622069 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622091 2756 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622411 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622424 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622452 2756 policy_none.go:49] "None policy: Start" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622471 2756 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:03:57.622505 kubelet[2756]: I0114 06:03:57.622490 2756 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:03:57.625293 kubelet[2756]: I0114 06:03:57.622709 2756 state_mem.go:75] "Updated machine memory state" Jan 14 06:03:57.632157 kubelet[2756]: I0114 06:03:57.632034 2756 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 06:03:57.632354 kubelet[2756]: I0114 06:03:57.632286 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:03:57.632442 kubelet[2756]: I0114 06:03:57.632358 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:03:57.638790 kubelet[2756]: E0114 06:03:57.638505 2756 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 06:03:57.644931 kubelet[2756]: I0114 06:03:57.644336 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:03:57.661950 kubelet[2756]: I0114 06:03:57.661111 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:57.662433 kubelet[2756]: I0114 06:03:57.662416 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:57.663322 kubelet[2756]: I0114 06:03:57.663298 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.702471 kubelet[2756]: E0114 06:03:57.702261 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:57.726480 kubelet[2756]: I0114 06:03:57.726433 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.726480 kubelet[2756]: I0114 06:03:57.726477 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.726480 kubelet[2756]: I0114 06:03:57.726503 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.727365 kubelet[2756]: I0114 06:03:57.726535 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.727365 kubelet[2756]: I0114 06:03:57.727114 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:57.727365 kubelet[2756]: I0114 06:03:57.727199 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:57.727365 kubelet[2756]: I0114 06:03:57.727302 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:57.727365 kubelet[2756]: I0114 06:03:57.727326 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1511c0ad88f6e9128d47cdad2da07dad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1511c0ad88f6e9128d47cdad2da07dad\") " pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:57.727702 kubelet[2756]: I0114 06:03:57.727350 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 06:03:57.768285 kubelet[2756]: I0114 06:03:57.768236 2756 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 06:03:57.813264 kubelet[2756]: I0114 06:03:57.812145 2756 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 06:03:57.813264 kubelet[2756]: I0114 06:03:57.812234 2756 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 06:03:57.999352 kubelet[2756]: E0114 06:03:57.999237 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.003269 kubelet[2756]: E0114 06:03:58.003149 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.006854 kubelet[2756]: E0114 06:03:58.006765 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.501376 kubelet[2756]: I0114 06:03:58.501251 2756 apiserver.go:52] "Watching apiserver" Jan 14 06:03:58.523469 kubelet[2756]: I0114 06:03:58.523392 2756 desired_state_of_world_populator.go:158] "Finished populating initial desired 
state of world" Jan 14 06:03:58.596309 kubelet[2756]: I0114 06:03:58.596269 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:58.597166 kubelet[2756]: E0114 06:03:58.596344 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.597166 kubelet[2756]: I0114 06:03:58.596859 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:58.625882 kubelet[2756]: E0114 06:03:58.625791 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 06:03:58.626074 kubelet[2756]: E0114 06:03:58.626005 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.630220 kubelet[2756]: E0114 06:03:58.629911 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 06:03:58.630220 kubelet[2756]: E0114 06:03:58.630058 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:58.650936 kubelet[2756]: I0114 06:03:58.650822 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6508027969999999 podStartE2EDuration="1.650802797s" podCreationTimestamp="2026-01-14 06:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:03:58.650475738 +0000 UTC m=+1.271160168" watchObservedRunningTime="2026-01-14 
06:03:58.650802797 +0000 UTC m=+1.271487248" Jan 14 06:03:58.687035 kubelet[2756]: I0114 06:03:58.686918 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.686453383 podStartE2EDuration="3.686453383s" podCreationTimestamp="2026-01-14 06:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:03:58.68594168 +0000 UTC m=+1.306626131" watchObservedRunningTime="2026-01-14 06:03:58.686453383 +0000 UTC m=+1.307137814" Jan 14 06:03:59.599186 kubelet[2756]: E0114 06:03:59.599040 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:03:59.599186 kubelet[2756]: E0114 06:03:59.599152 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:00.756322 kubelet[2756]: I0114 06:04:00.756019 2756 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 06:04:00.757217 containerd[1601]: time="2026-01-14T06:04:00.757104872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 06:04:00.758256 kubelet[2756]: I0114 06:04:00.757469 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 06:04:01.482702 kubelet[2756]: I0114 06:04:01.480963 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.48094406 podStartE2EDuration="4.48094406s" podCreationTimestamp="2026-01-14 06:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:03:58.743245704 +0000 UTC m=+1.363930166" watchObservedRunningTime="2026-01-14 06:04:01.48094406 +0000 UTC m=+4.101628492" Jan 14 06:04:01.491805 kubelet[2756]: W0114 06:04:01.491539 2756 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 14 06:04:01.491805 kubelet[2756]: E0114 06:04:01.491780 2756 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 14 06:04:01.499950 systemd[1]: Created slice kubepods-besteffort-pode90204c7_8080_478a_9d4c_64dcf4e08309.slice - libcontainer container kubepods-besteffort-pode90204c7_8080_478a_9d4c_64dcf4e08309.slice. 
Jan 14 06:04:01.556412 kubelet[2756]: I0114 06:04:01.556220 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e90204c7-8080-478a-9d4c-64dcf4e08309-kube-proxy\") pod \"kube-proxy-q42mj\" (UID: \"e90204c7-8080-478a-9d4c-64dcf4e08309\") " pod="kube-system/kube-proxy-q42mj" Jan 14 06:04:01.556412 kubelet[2756]: I0114 06:04:01.556326 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90204c7-8080-478a-9d4c-64dcf4e08309-xtables-lock\") pod \"kube-proxy-q42mj\" (UID: \"e90204c7-8080-478a-9d4c-64dcf4e08309\") " pod="kube-system/kube-proxy-q42mj" Jan 14 06:04:01.556412 kubelet[2756]: I0114 06:04:01.556355 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lmw\" (UniqueName: \"kubernetes.io/projected/e90204c7-8080-478a-9d4c-64dcf4e08309-kube-api-access-27lmw\") pod \"kube-proxy-q42mj\" (UID: \"e90204c7-8080-478a-9d4c-64dcf4e08309\") " pod="kube-system/kube-proxy-q42mj" Jan 14 06:04:01.556412 kubelet[2756]: I0114 06:04:01.556393 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90204c7-8080-478a-9d4c-64dcf4e08309-lib-modules\") pod \"kube-proxy-q42mj\" (UID: \"e90204c7-8080-478a-9d4c-64dcf4e08309\") " pod="kube-system/kube-proxy-q42mj" Jan 14 06:04:01.915325 systemd[1]: Created slice kubepods-besteffort-pod55ed591f_7752_4459_a8fc_1c5162bab374.slice - libcontainer container kubepods-besteffort-pod55ed591f_7752_4459_a8fc_1c5162bab374.slice. 
Jan 14 06:04:01.959243 kubelet[2756]: I0114 06:04:01.958309 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs97d\" (UniqueName: \"kubernetes.io/projected/55ed591f-7752-4459-a8fc-1c5162bab374-kube-api-access-rs97d\") pod \"tigera-operator-7dcd859c48-ff6nn\" (UID: \"55ed591f-7752-4459-a8fc-1c5162bab374\") " pod="tigera-operator/tigera-operator-7dcd859c48-ff6nn" Jan 14 06:04:01.959243 kubelet[2756]: I0114 06:04:01.958414 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55ed591f-7752-4459-a8fc-1c5162bab374-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ff6nn\" (UID: \"55ed591f-7752-4459-a8fc-1c5162bab374\") " pod="tigera-operator/tigera-operator-7dcd859c48-ff6nn" Jan 14 06:04:02.098234 kubelet[2756]: E0114 06:04:02.097694 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:02.223546 containerd[1601]: time="2026-01-14T06:04:02.222862196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ff6nn,Uid:55ed591f-7752-4459-a8fc-1c5162bab374,Namespace:tigera-operator,Attempt:0,}" Jan 14 06:04:02.347297 containerd[1601]: time="2026-01-14T06:04:02.346514790Z" level=info msg="connecting to shim fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87" address="unix:///run/containerd/s/d3e92b346fe059164f6af80718cad9889cb1d1c94fa12166236db163f838ae73" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:02.498110 systemd[1]: Started cri-containerd-fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87.scope - libcontainer container fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87. 
Jan 14 06:04:02.537702 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 06:04:02.537849 kernel: audit: type=1334 audit(1768370642.527:429): prog-id=133 op=LOAD Jan 14 06:04:02.527000 audit: BPF prog-id=133 op=LOAD Jan 14 06:04:02.528000 audit: BPF prog-id=134 op=LOAD Jan 14 06:04:02.528000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.564228 kernel: audit: type=1334 audit(1768370642.528:430): prog-id=134 op=LOAD Jan 14 06:04:02.564421 kernel: audit: type=1300 audit(1768370642.528:430): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.585924 kernel: audit: type=1327 audit(1768370642.528:430): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.586068 kernel: audit: type=1334 audit(1768370642.528:431): prog-id=134 op=UNLOAD Jan 14 06:04:02.528000 audit: BPF prog-id=134 op=UNLOAD Jan 14 06:04:02.590032 kernel: audit: type=1300 audit(1768370642.528:431): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.528000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.608742 kernel: audit: type=1327 audit(1768370642.528:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.619457 kubelet[2756]: E0114 06:04:02.619257 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:02.623132 kernel: audit: type=1334 audit(1768370642.529:432): prog-id=135 op=LOAD Jan 14 06:04:02.529000 audit: BPF prog-id=135 op=LOAD Jan 14 06:04:02.625773 kernel: audit: type=1300 audit(1768370642.529:432): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.529000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001b0488 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.644536 kernel: audit: type=1327 audit(1768370642.529:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.658102 kubelet[2756]: E0114 06:04:02.657729 2756 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:04:02.661739 kubelet[2756]: E0114 06:04:02.661026 2756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e90204c7-8080-478a-9d4c-64dcf4e08309-kube-proxy podName:e90204c7-8080-478a-9d4c-64dcf4e08309 nodeName:}" failed. No retries permitted until 2026-01-14 06:04:03.160238336 +0000 UTC m=+5.780922798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e90204c7-8080-478a-9d4c-64dcf4e08309-kube-proxy") pod "kube-proxy-q42mj" (UID: "e90204c7-8080-478a-9d4c-64dcf4e08309") : failed to sync configmap cache: timed out waiting for the condition Jan 14 06:04:02.529000 audit: BPF prog-id=136 op=LOAD Jan 14 06:04:02.529000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.529000 audit: BPF prog-id=136 op=UNLOAD Jan 14 06:04:02.529000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.529000 audit: BPF prog-id=135 op=UNLOAD Jan 14 06:04:02.529000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:04:02.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.529000 audit: BPF prog-id=137 op=LOAD Jan 14 06:04:02.529000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2821 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:02.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653530653765643131393933306461656335646461393966663236 Jan 14 06:04:02.672071 containerd[1601]: time="2026-01-14T06:04:02.669833573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ff6nn,Uid:55ed591f-7752-4459-a8fc-1c5162bab374,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87\"" Jan 14 06:04:02.677747 containerd[1601]: time="2026-01-14T06:04:02.677511935Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 06:04:03.319500 kubelet[2756]: E0114 06:04:03.319353 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:03.321030 containerd[1601]: time="2026-01-14T06:04:03.320663938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q42mj,Uid:e90204c7-8080-478a-9d4c-64dcf4e08309,Namespace:kube-system,Attempt:0,}" Jan 14 06:04:03.397753 containerd[1601]: 
time="2026-01-14T06:04:03.397520585Z" level=info msg="connecting to shim ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882" address="unix:///run/containerd/s/c08a86967ae1a2a33f25317ce65d14eb85d1f2f62534ab876de37f10f48ad26a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:03.461235 systemd[1]: Started cri-containerd-ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882.scope - libcontainer container ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882. Jan 14 06:04:03.492000 audit: BPF prog-id=138 op=LOAD Jan 14 06:04:03.493000 audit: BPF prog-id=139 op=LOAD Jan 14 06:04:03.493000 audit[2880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.493000 audit: BPF prog-id=139 op=UNLOAD Jan 14 06:04:03.493000 audit[2880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.494000 audit: BPF prog-id=140 op=LOAD Jan 14 06:04:03.494000 
audit[2880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.494000 audit: BPF prog-id=141 op=LOAD Jan 14 06:04:03.494000 audit[2880]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.494000 audit: BPF prog-id=141 op=UNLOAD Jan 14 06:04:03.494000 audit[2880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.494000 audit: BPF 
prog-id=140 op=UNLOAD Jan 14 06:04:03.494000 audit[2880]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.494000 audit: BPF prog-id=142 op=LOAD Jan 14 06:04:03.494000 audit[2880]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2868 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562653436336431303031393332623337393265653037623463336535 Jan 14 06:04:03.540544 containerd[1601]: time="2026-01-14T06:04:03.540458088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q42mj,Uid:e90204c7-8080-478a-9d4c-64dcf4e08309,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882\"" Jan 14 06:04:03.542481 kubelet[2756]: E0114 06:04:03.542368 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:03.547883 containerd[1601]: time="2026-01-14T06:04:03.547803941Z" level=info msg="CreateContainer within 
sandbox \"ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 06:04:03.584470 containerd[1601]: time="2026-01-14T06:04:03.584105303Z" level=info msg="Container 1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:03.604103 containerd[1601]: time="2026-01-14T06:04:03.603916624Z" level=info msg="CreateContainer within sandbox \"ebe463d1001932b3792ee07b4c3e5889e4621d38c5c2e8522efa5a143b146882\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89\"" Jan 14 06:04:03.605429 containerd[1601]: time="2026-01-14T06:04:03.605364445Z" level=info msg="StartContainer for \"1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89\"" Jan 14 06:04:03.608450 containerd[1601]: time="2026-01-14T06:04:03.608354177Z" level=info msg="connecting to shim 1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89" address="unix:///run/containerd/s/c08a86967ae1a2a33f25317ce65d14eb85d1f2f62534ab876de37f10f48ad26a" protocol=ttrpc version=3 Jan 14 06:04:03.661208 systemd[1]: Started cri-containerd-1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89.scope - libcontainer container 1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89. 
Jan 14 06:04:03.706396 kubelet[2756]: E0114 06:04:03.705989 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:03.779000 audit: BPF prog-id=143 op=LOAD Jan 14 06:04:03.779000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2868 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161326331633432333062373537393264396235303466306235656366 Jan 14 06:04:03.779000 audit: BPF prog-id=144 op=LOAD Jan 14 06:04:03.779000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2868 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161326331633432333062373537393264396235303466306235656366 Jan 14 06:04:03.780000 audit: BPF prog-id=144 op=UNLOAD Jan 14 06:04:03.780000 audit[2905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2868 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.780000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161326331633432333062373537393264396235303466306235656366 Jan 14 06:04:03.780000 audit: BPF prog-id=143 op=UNLOAD Jan 14 06:04:03.780000 audit[2905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2868 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161326331633432333062373537393264396235303466306235656366 Jan 14 06:04:03.780000 audit: BPF prog-id=145 op=LOAD Jan 14 06:04:03.780000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2868 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:03.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161326331633432333062373537393264396235303466306235656366 Jan 14 06:04:03.840953 containerd[1601]: time="2026-01-14T06:04:03.840727564Z" level=info msg="StartContainer for \"1a2c1c4230b75792d9b504f0b5ecf2a9304bda2c672cb76a155ba3d9d7458f89\" returns successfully" Jan 14 06:04:04.147000 audit[2972]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 06:04:04.147000 audit[2972]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc81ca05b0 a2=0 a3=7ffc81ca059c items=0 ppid=2918 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:04:04.149000 audit[2973]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.149000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee3634250 a2=0 a3=7ffee363423c items=0 ppid=2918 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:04:04.153000 audit[2975]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.153000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7286dae0 a2=0 a3=7ffd7286dacc items=0 ppid=2918 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:04:04.156000 audit[2976]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain 
pid=2976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.156000 audit[2976]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec5c8a8f0 a2=0 a3=7ffec5c8a8dc items=0 ppid=2918 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:04:04.157000 audit[2974]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.157000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd49762d40 a2=0 a3=7ffd49762d2c items=0 ppid=2918 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:04:04.171000 audit[2979]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.171000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9778b4e0 a2=0 a3=7ffd9778b4cc items=0 ppid=2918 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:04:04.229554 kubelet[2756]: E0114 06:04:04.229114 2756 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:04.266000 audit[2980]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.266000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe99f09290 a2=0 a3=7ffe99f0927c items=0 ppid=2918 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:04:04.280000 audit[2982]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.280000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc86efccd0 a2=0 a3=7ffc86efccbc items=0 ppid=2918 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 06:04:04.294000 audit[2985]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.294000 audit[2985]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd9d89a9b0 a2=0 a3=7ffd9d89a99c items=0 
ppid=2918 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 06:04:04.299000 audit[2986]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.299000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2a34060 a2=0 a3=7ffef2a3404c items=0 ppid=2918 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.299000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:04:04.307000 audit[2988]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.307000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc71e73d50 a2=0 a3=7ffc71e73d3c items=0 ppid=2918 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.307000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:04:04.311000 audit[2989]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.311000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdba7b1f0 a2=0 a3=7fffdba7b1dc items=0 ppid=2918 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:04:04.320000 audit[2991]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.320000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff078207a0 a2=0 a3=7fff0782078c items=0 ppid=2918 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:04:04.335000 audit[2994]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.335000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7fff55002a40 a2=0 a3=7fff55002a2c items=0 ppid=2918 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 06:04:04.339000 audit[2995]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.339000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde004bb60 a2=0 a3=7ffde004bb4c items=0 ppid=2918 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:04:04.346000 audit[2997]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.346000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcea671210 a2=0 a3=7ffcea6711fc items=0 ppid=2918 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.346000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:04:04.350000 audit[2998]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.350000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe859a6cf0 a2=0 a3=7ffe859a6cdc items=0 ppid=2918 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:04:04.360000 audit[3000]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.360000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd146fd870 a2=0 a3=7ffd146fd85c items=0 ppid=2918 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:04:04.375000 audit[3003]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.375000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffe6b8dd800 a2=0 a3=7ffe6b8dd7ec items=0 ppid=2918 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:04:04.387000 audit[3006]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.387000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd34009310 a2=0 a3=7ffd340092fc items=0 ppid=2918 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:04:04.390000 audit[3007]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.390000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd8997130 a2=0 a3=7ffdd899711c items=0 ppid=2918 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.390000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:04:04.399000 audit[3009]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.399000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6ad63310 a2=0 a3=7ffc6ad632fc items=0 ppid=2918 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:04:04.412000 audit[3012]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.412000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc8e14a10 a2=0 a3=7ffdc8e149fc items=0 ppid=2918 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:04:04.415000 audit[3013]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.415000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff587653f0 a2=0 a3=7fff587653dc items=0 ppid=2918 pid=3013 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:04:04.424000 audit[3015]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:04:04.424000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcc9a7e950 a2=0 a3=7ffcc9a7e93c items=0 ppid=2918 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:04:04.475000 audit[3021]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:04.475000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde8594520 a2=0 a3=7ffde859450c items=0 ppid=2918 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:04.492000 audit[3021]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 06:04:04.492000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffde8594520 a2=0 a3=7ffde859450c items=0 ppid=2918 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:04.496000 audit[3026]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.496000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd432870f0 a2=0 a3=7ffd432870dc items=0 ppid=2918 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:04:04.503000 audit[3028]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.503000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdbba6dab0 a2=0 a3=7ffdbba6da9c items=0 ppid=2918 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.503000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 06:04:04.514000 audit[3031]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.514000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe88580b10 a2=0 a3=7ffe88580afc items=0 ppid=2918 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 06:04:04.519000 audit[3032]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.519000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff590ba6e0 a2=0 a3=7fff590ba6cc items=0 ppid=2918 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:04:04.527000 audit[3034]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.527000 audit[3034]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe670cc6b0 a2=0 a3=7ffe670cc69c items=0 ppid=2918 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:04:04.531000 audit[3035]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.531000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9ce5ad30 a2=0 a3=7ffe9ce5ad1c items=0 ppid=2918 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:04:04.539000 audit[3037]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.539000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4ea32e10 a2=0 a3=7ffd4ea32dfc items=0 ppid=2918 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.539000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 06:04:04.550000 audit[3040]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.550000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffea8d318a0 a2=0 a3=7ffea8d3188c items=0 ppid=2918 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:04:04.553000 audit[3041]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.553000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc56b76340 a2=0 a3=7ffc56b7632c items=0 ppid=2918 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:04:04.561000 audit[3043]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.561000 audit[3043]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe17dffa20 a2=0 a3=7ffe17dffa0c items=0 ppid=2918 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:04:04.566000 audit[3044]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.566000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd72061b50 a2=0 a3=7ffd72061b3c items=0 ppid=2918 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:04:04.575000 audit[3046]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.575000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe80505020 a2=0 a3=7ffe8050500c items=0 ppid=2918 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.575000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:04:04.586000 audit[3049]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.586000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd937eeea0 a2=0 a3=7ffd937eee8c items=0 ppid=2918 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:04:04.597000 audit[3052]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.597000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2ffc8f10 a2=0 a3=7ffd2ffc8efc items=0 ppid=2918 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 06:04:04.600000 audit[3053]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.600000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef62c1ad0 a2=0 a3=7ffef62c1abc items=0 ppid=2918 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:04:04.608000 audit[3055]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.608000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdab334190 a2=0 a3=7ffdab33417c items=0 ppid=2918 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.608000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:04:04.618000 audit[3058]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.618000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefbcb16a0 a2=0 a3=7ffefbcb168c items=0 ppid=2918 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.618000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:04:04.622000 audit[3059]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.622000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe8ba56a0 a2=0 a3=7fffe8ba568c items=0 ppid=2918 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:04:04.629000 audit[3061]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.629000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffcd3fd600 a2=0 a3=7fffcd3fd5ec items=0 ppid=2918 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:04:04.633000 audit[3062]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.633000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7e2f85b0 a2=0 
a3=7ffc7e2f859c items=0 ppid=2918 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:04:04.638690 kubelet[2756]: E0114 06:04:04.638425 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:04.638690 kubelet[2756]: E0114 06:04:04.638458 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:04.639104 kubelet[2756]: E0114 06:04:04.638915 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:04.642000 audit[3067]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.642000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde9b9e7b0 a2=0 a3=7ffde9b9e79c items=0 ppid=2918 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:04:04.653000 audit[3071]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:04:04.653000 
audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffddc61260 a2=0 a3=7fffddc6124c items=0 ppid=2918 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:04:04.662000 audit[3073]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:04:04.662000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe426e4110 a2=0 a3=7ffe426e40fc items=0 ppid=2918 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.662000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:04.664000 audit[3073]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:04:04.664000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe426e4110 a2=0 a3=7ffe426e40fc items=0 ppid=2918 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:04.664000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:04.788107 kubelet[2756]: I0114 06:04:04.787971 2756 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-proxy-q42mj" podStartSLOduration=3.787948579 podStartE2EDuration="3.787948579s" podCreationTimestamp="2026-01-14 06:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:04:04.725103127 +0000 UTC m=+7.345787579" watchObservedRunningTime="2026-01-14 06:04:04.787948579 +0000 UTC m=+7.408633010" Jan 14 06:04:05.327223 containerd[1601]: time="2026-01-14T06:04:05.327133248Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:05.329815 containerd[1601]: time="2026-01-14T06:04:05.329492007Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23559564" Jan 14 06:04:05.331795 containerd[1601]: time="2026-01-14T06:04:05.331415170Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:05.335063 containerd[1601]: time="2026-01-14T06:04:05.334801995Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:05.335773 containerd[1601]: time="2026-01-14T06:04:05.335659652Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.657700254s" Jan 14 06:04:05.335773 containerd[1601]: time="2026-01-14T06:04:05.335719075Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 06:04:05.339797 containerd[1601]: time="2026-01-14T06:04:05.339740231Z" level=info msg="CreateContainer within sandbox \"fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 06:04:05.352896 containerd[1601]: time="2026-01-14T06:04:05.352791935Z" level=info msg="Container e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:05.363729 containerd[1601]: time="2026-01-14T06:04:05.363552962Z" level=info msg="CreateContainer within sandbox \"fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e\"" Jan 14 06:04:05.365058 containerd[1601]: time="2026-01-14T06:04:05.364970787Z" level=info msg="StartContainer for \"e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e\"" Jan 14 06:04:05.366251 containerd[1601]: time="2026-01-14T06:04:05.366109829Z" level=info msg="connecting to shim e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e" address="unix:///run/containerd/s/d3e92b346fe059164f6af80718cad9889cb1d1c94fa12166236db163f838ae73" protocol=ttrpc version=3 Jan 14 06:04:05.408054 systemd[1]: Started cri-containerd-e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e.scope - libcontainer container e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e. 
Jan 14 06:04:05.425000 audit: BPF prog-id=146 op=LOAD Jan 14 06:04:05.426000 audit: BPF prog-id=147 op=LOAD Jan 14 06:04:05.426000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.426000 audit: BPF prog-id=147 op=UNLOAD Jan 14 06:04:05.426000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.426000 audit: BPF prog-id=148 op=LOAD Jan 14 06:04:05.426000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.426000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.427000 audit: BPF prog-id=149 op=LOAD Jan 14 06:04:05.427000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.427000 audit: BPF prog-id=149 op=UNLOAD Jan 14 06:04:05.427000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.427000 audit: BPF prog-id=148 op=UNLOAD Jan 14 06:04:05.427000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:04:05.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.427000 audit: BPF prog-id=150 op=LOAD Jan 14 06:04:05.427000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2821 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:05.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536326634613363646437353365653034373639376130343763363030 Jan 14 06:04:05.456469 containerd[1601]: time="2026-01-14T06:04:05.456321050Z" level=info msg="StartContainer for \"e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e\" returns successfully" Jan 14 06:04:05.648317 kubelet[2756]: E0114 06:04:05.647421 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:05.878397 update_engine[1572]: I20260114 06:04:05.878197 1572 update_attempter.cc:509] Updating boot flags... Jan 14 06:04:10.036047 systemd[1]: cri-containerd-e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e.scope: Deactivated successfully. 
Jan 14 06:04:10.047035 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 06:04:10.047123 kernel: audit: type=1334 audit(1768370650.040:509): prog-id=146 op=UNLOAD Jan 14 06:04:10.040000 audit: BPF prog-id=146 op=UNLOAD Jan 14 06:04:10.048336 containerd[1601]: time="2026-01-14T06:04:10.048060923Z" level=info msg="received container exit event container_id:\"e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e\" id:\"e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e\" pid:3087 exit_status:1 exited_at:{seconds:1768370650 nanos:41696366}" Jan 14 06:04:10.060674 kernel: audit: type=1334 audit(1768370650.040:510): prog-id=150 op=UNLOAD Jan 14 06:04:10.040000 audit: BPF prog-id=150 op=UNLOAD Jan 14 06:04:10.168993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e-rootfs.mount: Deactivated successfully. Jan 14 06:04:10.665847 kubelet[2756]: I0114 06:04:10.665755 2756 scope.go:117] "RemoveContainer" containerID="e62f4a3cdd753ee047697a047c600d131ca8875286e8ed4da038f057f4203e5e" Jan 14 06:04:10.669979 containerd[1601]: time="2026-01-14T06:04:10.669934268Z" level=info msg="CreateContainer within sandbox \"fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 06:04:10.699253 containerd[1601]: time="2026-01-14T06:04:10.699145810Z" level=info msg="Container ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:10.702530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867342498.mount: Deactivated successfully. 
Jan 14 06:04:10.717204 containerd[1601]: time="2026-01-14T06:04:10.717111482Z" level=info msg="CreateContainer within sandbox \"fee50e7ed119930daec5dda99ff26bd9ffe8a969617b70c80d288ea353277d87\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d\"" Jan 14 06:04:10.717948 containerd[1601]: time="2026-01-14T06:04:10.717920246Z" level=info msg="StartContainer for \"ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d\"" Jan 14 06:04:10.719182 containerd[1601]: time="2026-01-14T06:04:10.719118163Z" level=info msg="connecting to shim ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d" address="unix:///run/containerd/s/d3e92b346fe059164f6af80718cad9889cb1d1c94fa12166236db163f838ae73" protocol=ttrpc version=3 Jan 14 06:04:10.757955 systemd[1]: Started cri-containerd-ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d.scope - libcontainer container ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d. 
Jan 14 06:04:10.790719 kernel: audit: type=1334 audit(1768370650.784:511): prog-id=151 op=LOAD Jan 14 06:04:10.784000 audit: BPF prog-id=151 op=LOAD Jan 14 06:04:10.790000 audit: BPF prog-id=152 op=LOAD Jan 14 06:04:10.790000 audit[3169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.817549 kernel: audit: type=1334 audit(1768370650.790:512): prog-id=152 op=LOAD Jan 14 06:04:10.844175 kernel: audit: type=1300 audit(1768370650.790:512): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.844294 kernel: audit: type=1327 audit(1768370650.790:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=152 op=UNLOAD Jan 14 06:04:10.855784 kernel: audit: type=1334 audit(1768370650.793:513): prog-id=152 op=UNLOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.881743 kernel: audit: type=1300 audit(1768370650.793:513): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.882342 kernel: audit: type=1327 audit(1768370650.793:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=153 op=LOAD Jan 14 06:04:10.908260 kernel: audit: type=1334 audit(1768370650.793:514): prog-id=153 op=LOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=154 op=LOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=154 op=UNLOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=153 op=UNLOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.793000 audit: BPF prog-id=155 op=LOAD Jan 14 06:04:10.793000 audit[3169]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2821 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:10.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564396137633366666138366633646339396562376662383664346434 Jan 14 06:04:10.946767 containerd[1601]: time="2026-01-14T06:04:10.946429523Z" level=info msg="StartContainer for \"ed9a7c3ffa86f3dc99eb7fb86d4d41c6587dc06008d0439e63034f65753f2e9d\" returns successfully" Jan 14 06:04:11.699301 kubelet[2756]: I0114 06:04:11.698803 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ff6nn" podStartSLOduration=8.036143361 podStartE2EDuration="10.698783456s" podCreationTimestamp="2026-01-14 06:04:01 +0000 UTC" firstStartedPulling="2026-01-14 06:04:02.67513832 +0000 UTC m=+5.295822751" lastFinishedPulling="2026-01-14 06:04:05.337778416 +0000 UTC m=+7.958462846" observedRunningTime="2026-01-14 06:04:05.69238415 +0000 UTC m=+8.313068580" watchObservedRunningTime="2026-01-14 06:04:11.698783456 +0000 UTC m=+14.319467886" Jan 14 06:04:13.429775 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 14 06:04:13.429000 audit[1817]: USER_END pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:04:13.429000 audit[1817]: CRED_DISP pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 06:04:13.434914 sshd[1816]: Connection closed by 10.0.0.1 port 46032 Jan 14 06:04:13.435761 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 14 06:04:13.438000 audit[1812]: USER_END pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:04:13.438000 audit[1812]: CRED_DISP pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:04:13.443825 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:46032.service: Deactivated successfully. Jan 14 06:04:13.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:46032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:04:13.448409 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 06:04:13.449018 systemd[1]: session-8.scope: Consumed 6.680s CPU time, 216.8M memory peak. Jan 14 06:04:13.451810 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Jan 14 06:04:13.453966 systemd-logind[1568]: Removed session 8. 
Jan 14 06:04:15.734713 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 06:04:15.734836 kernel: audit: type=1325 audit(1768370655.729:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.729000 audit[3229]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.729000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff253f0cb0 a2=0 a3=7fff253f0c9c items=0 ppid=2918 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.756112 kernel: audit: type=1300 audit(1768370655.729:524): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff253f0cb0 a2=0 a3=7fff253f0c9c items=0 ppid=2918 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.763340 kernel: audit: type=1327 audit(1768370655.729:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.758000 audit[3229]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.786251 kernel: audit: type=1325 audit(1768370655.758:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.786490 kernel: audit: type=1300 audit(1768370655.758:525): 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff253f0cb0 a2=0 a3=0 items=0 ppid=2918 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.758000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff253f0cb0 a2=0 a3=0 items=0 ppid=2918 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.794734 kernel: audit: type=1327 audit(1768370655.758:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.827000 audit[3231]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.827000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffef461e820 a2=0 a3=7ffef461e80c items=0 ppid=2918 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.851154 kernel: audit: type=1325 audit(1768370655.827:526): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.851243 kernel: audit: type=1300 audit(1768370655.827:526): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffef461e820 a2=0 a3=7ffef461e80c items=0 ppid=2918 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.851269 kernel: audit: type=1327 audit(1768370655.827:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:15.860000 audit[3231]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.868685 kernel: audit: type=1325 audit(1768370655.860:527): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:15.860000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef461e820 a2=0 a3=0 items=0 ppid=2918 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:15.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:18.679000 audit[3235]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:18.679000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff007f71d0 a2=0 a3=7fff007f71bc items=0 ppid=2918 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:18.679000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:18.688000 audit[3235]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:18.688000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff007f71d0 a2=0 a3=0 items=0 ppid=2918 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:18.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:18.711000 audit[3237]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:18.711000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd72fb2fa0 a2=0 a3=7ffd72fb2f8c items=0 ppid=2918 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:18.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:18.719000 audit[3237]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:18.719000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd72fb2fa0 a2=0 a3=0 items=0 ppid=2918 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:04:18.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:19.756000 audit[3239]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:19.756000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc839c5180 a2=0 a3=7ffc839c516c items=0 ppid=2918 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:19.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:19.763000 audit[3239]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:19.763000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc839c5180 a2=0 a3=0 items=0 ppid=2918 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:19.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:20.359364 systemd[1]: Created slice kubepods-besteffort-pod56cce3cc_3189_4d5b_a576_f4da7afae3b6.slice - libcontainer container kubepods-besteffort-pod56cce3cc_3189_4d5b_a576_f4da7afae3b6.slice. 
Jan 14 06:04:20.448194 kubelet[2756]: I0114 06:04:20.448051 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56cce3cc-3189-4d5b-a576-f4da7afae3b6-tigera-ca-bundle\") pod \"calico-typha-5864b68dc-lrt75\" (UID: \"56cce3cc-3189-4d5b-a576-f4da7afae3b6\") " pod="calico-system/calico-typha-5864b68dc-lrt75" Jan 14 06:04:20.448194 kubelet[2756]: I0114 06:04:20.448119 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxnx5\" (UniqueName: \"kubernetes.io/projected/56cce3cc-3189-4d5b-a576-f4da7afae3b6-kube-api-access-jxnx5\") pod \"calico-typha-5864b68dc-lrt75\" (UID: \"56cce3cc-3189-4d5b-a576-f4da7afae3b6\") " pod="calico-system/calico-typha-5864b68dc-lrt75" Jan 14 06:04:20.448194 kubelet[2756]: I0114 06:04:20.448144 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56cce3cc-3189-4d5b-a576-f4da7afae3b6-typha-certs\") pod \"calico-typha-5864b68dc-lrt75\" (UID: \"56cce3cc-3189-4d5b-a576-f4da7afae3b6\") " pod="calico-system/calico-typha-5864b68dc-lrt75" Jan 14 06:04:20.519219 systemd[1]: Created slice kubepods-besteffort-pod11fe8048_e517_4a59_8256_f9c9075d1e74.slice - libcontainer container kubepods-besteffort-pod11fe8048_e517_4a59_8256_f9c9075d1e74.slice. 
Jan 14 06:04:20.549383 kubelet[2756]: I0114 06:04:20.549237 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-flexvol-driver-host\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549383 kubelet[2756]: I0114 06:04:20.549317 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-var-lib-calico\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549383 kubelet[2756]: I0114 06:04:20.549338 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-lib-modules\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549383 kubelet[2756]: I0114 06:04:20.549354 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-cni-net-dir\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549383 kubelet[2756]: I0114 06:04:20.549367 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-cni-bin-dir\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549830 kubelet[2756]: I0114 06:04:20.549380 2756 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-var-run-calico\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549830 kubelet[2756]: I0114 06:04:20.549394 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhpn\" (UniqueName: \"kubernetes.io/projected/11fe8048-e517-4a59-8256-f9c9075d1e74-kube-api-access-7vhpn\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549830 kubelet[2756]: I0114 06:04:20.549426 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fe8048-e517-4a59-8256-f9c9075d1e74-tigera-ca-bundle\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549830 kubelet[2756]: I0114 06:04:20.549503 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/11fe8048-e517-4a59-8256-f9c9075d1e74-node-certs\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.549830 kubelet[2756]: I0114 06:04:20.549538 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-policysync\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.550330 kubelet[2756]: I0114 06:04:20.549558 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-cni-log-dir\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.550330 kubelet[2756]: I0114 06:04:20.549667 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11fe8048-e517-4a59-8256-f9c9075d1e74-xtables-lock\") pod \"calico-node-tq48t\" (UID: \"11fe8048-e517-4a59-8256-f9c9075d1e74\") " pod="calico-system/calico-node-tq48t" Jan 14 06:04:20.652950 kubelet[2756]: E0114 06:04:20.652773 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.652950 kubelet[2756]: W0114 06:04:20.652844 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.652950 kubelet[2756]: E0114 06:04:20.652888 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.654173 kubelet[2756]: E0114 06:04:20.653790 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.654173 kubelet[2756]: W0114 06:04:20.653807 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.654173 kubelet[2756]: E0114 06:04:20.654102 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.654804 kubelet[2756]: E0114 06:04:20.654532 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.654804 kubelet[2756]: W0114 06:04:20.654742 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.654887 kubelet[2756]: E0114 06:04:20.654836 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.655717 kubelet[2756]: E0114 06:04:20.655690 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.655717 kubelet[2756]: W0114 06:04:20.655704 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.656081 kubelet[2756]: E0114 06:04:20.655841 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.658425 kubelet[2756]: E0114 06:04:20.658346 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.658425 kubelet[2756]: W0114 06:04:20.658408 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.658761 kubelet[2756]: E0114 06:04:20.658521 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.659342 kubelet[2756]: E0114 06:04:20.659281 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.659342 kubelet[2756]: W0114 06:04:20.659335 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.659815 kubelet[2756]: E0114 06:04:20.659689 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.660162 kubelet[2756]: E0114 06:04:20.660106 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.660162 kubelet[2756]: W0114 06:04:20.660157 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.660657 kubelet[2756]: E0114 06:04:20.660379 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.661784 kubelet[2756]: E0114 06:04:20.661707 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.663725 kubelet[2756]: W0114 06:04:20.661931 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.664396 kubelet[2756]: E0114 06:04:20.664312 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.664396 kubelet[2756]: W0114 06:04:20.664375 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.664970 kubelet[2756]: E0114 06:04:20.664820 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.665507 kubelet[2756]: E0114 06:04:20.665160 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.665507 kubelet[2756]: E0114 06:04:20.665164 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.665866 kubelet[2756]: W0114 06:04:20.665174 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.666663 kubelet[2756]: E0114 06:04:20.665874 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.666816 kubelet[2756]: E0114 06:04:20.666135 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.666949 kubelet[2756]: W0114 06:04:20.666880 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.667716 kubelet[2756]: E0114 06:04:20.667313 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.670008 kubelet[2756]: E0114 06:04:20.669804 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:20.670502 kubelet[2756]: E0114 06:04:20.670395 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.670502 kubelet[2756]: W0114 06:04:20.670406 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.671338 kubelet[2756]: E0114 06:04:20.670906 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.671338 kubelet[2756]: W0114 06:04:20.670919 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.671338 kubelet[2756]: E0114 06:04:20.671249 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.671338 kubelet[2756]: E0114 06:04:20.671298 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.672794 kubelet[2756]: E0114 06:04:20.671470 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.672794 kubelet[2756]: W0114 06:04:20.671481 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.672794 kubelet[2756]: E0114 06:04:20.671493 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.672904 kubelet[2756]: E0114 06:04:20.672884 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.672904 kubelet[2756]: W0114 06:04:20.672896 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.672964 kubelet[2756]: E0114 06:04:20.672908 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.673348 containerd[1601]: time="2026-01-14T06:04:20.673304142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864b68dc-lrt75,Uid:56cce3cc-3189-4d5b-a576-f4da7afae3b6,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:20.673915 kubelet[2756]: E0114 06:04:20.673707 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.673915 kubelet[2756]: W0114 06:04:20.673721 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.673915 kubelet[2756]: E0114 06:04:20.673743 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.675651 kubelet[2756]: E0114 06:04:20.674356 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.675651 kubelet[2756]: W0114 06:04:20.674372 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.675758 kubelet[2756]: E0114 06:04:20.675715 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.677159 kubelet[2756]: E0114 06:04:20.676524 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.677159 kubelet[2756]: W0114 06:04:20.676882 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.677159 kubelet[2756]: E0114 06:04:20.676898 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.683906 kubelet[2756]: E0114 06:04:20.683758 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.683906 kubelet[2756]: W0114 06:04:20.683818 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.683906 kubelet[2756]: E0114 06:04:20.683844 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.684801 kubelet[2756]: E0114 06:04:20.684782 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.684897 kubelet[2756]: W0114 06:04:20.684878 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.684967 kubelet[2756]: E0114 06:04:20.684952 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.729712 containerd[1601]: time="2026-01-14T06:04:20.726906470Z" level=info msg="connecting to shim 993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931" address="unix:///run/containerd/s/8bf897b04872fef58959c0c53e7efe84b405c4f79db8c5c76fbe685e0d17ca86" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:20.748706 kubelet[2756]: E0114 06:04:20.748346 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:20.805278 systemd[1]: Started cri-containerd-993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931.scope - libcontainer container 993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931. 
Jan 14 06:04:20.820425 kubelet[2756]: E0114 06:04:20.820344 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.820425 kubelet[2756]: W0114 06:04:20.820414 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.820536 kubelet[2756]: E0114 06:04:20.820439 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.822175 kubelet[2756]: E0114 06:04:20.822029 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.822339 kubelet[2756]: W0114 06:04:20.822248 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.823664 kubelet[2756]: E0114 06:04:20.823421 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.825214 kubelet[2756]: E0114 06:04:20.825182 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.825427 kubelet[2756]: W0114 06:04:20.825198 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.825427 kubelet[2756]: E0114 06:04:20.825368 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.831687 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 14 06:04:20.831779 kernel: audit: type=1325 audit(1768370660.825:534): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:20.825000 audit[3309]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.828864 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.831992 kubelet[2756]: W0114 06:04:20.828877 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.828972 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.830410 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.831992 kubelet[2756]: W0114 06:04:20.830421 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.830435 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.830972 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.831992 kubelet[2756]: W0114 06:04:20.830987 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.830999 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.831992 kubelet[2756]: E0114 06:04:20.831495 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.832439 kubelet[2756]: W0114 06:04:20.831507 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.832439 kubelet[2756]: E0114 06:04:20.831519 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.832439 kubelet[2756]: E0114 06:04:20.831882 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.832439 kubelet[2756]: W0114 06:04:20.831897 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.832439 kubelet[2756]: E0114 06:04:20.831909 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.832439 kubelet[2756]: E0114 06:04:20.832329 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:20.833880 containerd[1601]: time="2026-01-14T06:04:20.833400791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tq48t,Uid:11fe8048-e517-4a59-8256-f9c9075d1e74,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:20.834283 kubelet[2756]: E0114 06:04:20.834004 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.834283 kubelet[2756]: W0114 06:04:20.834066 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.834283 kubelet[2756]: E0114 06:04:20.834132 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.834854 kubelet[2756]: E0114 06:04:20.834412 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.834854 kubelet[2756]: W0114 06:04:20.834423 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.834854 kubelet[2756]: E0114 06:04:20.834434 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.834854 kubelet[2756]: E0114 06:04:20.834796 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.834854 kubelet[2756]: W0114 06:04:20.834808 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.834854 kubelet[2756]: E0114 06:04:20.834821 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.835935 kubelet[2756]: E0114 06:04:20.835899 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.835935 kubelet[2756]: W0114 06:04:20.835914 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.835935 kubelet[2756]: E0114 06:04:20.835928 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.836796 kubelet[2756]: E0114 06:04:20.836769 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.836796 kubelet[2756]: W0114 06:04:20.836786 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.836965 kubelet[2756]: E0114 06:04:20.836800 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.837225 kubelet[2756]: E0114 06:04:20.837054 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.837225 kubelet[2756]: W0114 06:04:20.837065 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.837225 kubelet[2756]: E0114 06:04:20.837079 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.837855 kubelet[2756]: E0114 06:04:20.837835 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.837855 kubelet[2756]: W0114 06:04:20.837849 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.837934 kubelet[2756]: E0114 06:04:20.837863 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.838456 kubelet[2756]: E0114 06:04:20.838367 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.838456 kubelet[2756]: W0114 06:04:20.838428 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.838456 kubelet[2756]: E0114 06:04:20.838442 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.840249 kubelet[2756]: E0114 06:04:20.840219 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.840249 kubelet[2756]: W0114 06:04:20.840235 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.840249 kubelet[2756]: E0114 06:04:20.840247 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.825000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcf6181c40 a2=0 a3=7ffcf6181c2c items=0 ppid=2918 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.841783 kubelet[2756]: E0114 06:04:20.840506 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.841783 kubelet[2756]: W0114 06:04:20.840517 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.841783 kubelet[2756]: E0114 06:04:20.840528 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.842435 kubelet[2756]: E0114 06:04:20.842239 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.842435 kubelet[2756]: W0114 06:04:20.842253 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.842435 kubelet[2756]: E0114 06:04:20.842265 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.842679 kubelet[2756]: E0114 06:04:20.842670 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.842738 kubelet[2756]: W0114 06:04:20.842682 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.842738 kubelet[2756]: E0114 06:04:20.842693 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.855558 kubelet[2756]: E0114 06:04:20.855473 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.855558 kubelet[2756]: W0114 06:04:20.855541 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.855558 kubelet[2756]: E0114 06:04:20.855643 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.856221 kubelet[2756]: I0114 06:04:20.856147 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7z8\" (UniqueName: \"kubernetes.io/projected/1fd1f2cb-320b-495b-b1a9-bd981c71562f-kube-api-access-lp7z8\") pod \"csi-node-driver-sgth2\" (UID: \"1fd1f2cb-320b-495b-b1a9-bd981c71562f\") " pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:20.856280 kubelet[2756]: E0114 06:04:20.856244 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.856280 kubelet[2756]: W0114 06:04:20.856254 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.856280 kubelet[2756]: E0114 06:04:20.856272 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.857057 kubelet[2756]: E0114 06:04:20.856766 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.857057 kubelet[2756]: W0114 06:04:20.856781 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.857645 kubelet[2756]: E0114 06:04:20.856849 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.859786 kubelet[2756]: E0114 06:04:20.859715 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.859786 kubelet[2756]: W0114 06:04:20.859777 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.859876 kubelet[2756]: E0114 06:04:20.859799 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.860157 kubelet[2756]: I0114 06:04:20.860024 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fd1f2cb-320b-495b-b1a9-bd981c71562f-socket-dir\") pod \"csi-node-driver-sgth2\" (UID: \"1fd1f2cb-320b-495b-b1a9-bd981c71562f\") " pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:20.862226 kubelet[2756]: E0114 06:04:20.861747 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.862226 kubelet[2756]: W0114 06:04:20.861765 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.862226 kubelet[2756]: E0114 06:04:20.861847 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.862717 kubelet[2756]: E0114 06:04:20.862556 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.862792 kubelet[2756]: W0114 06:04:20.862741 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.863868 kubelet[2756]: E0114 06:04:20.862813 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.863868 kubelet[2756]: I0114 06:04:20.863352 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1fd1f2cb-320b-495b-b1a9-bd981c71562f-varrun\") pod \"csi-node-driver-sgth2\" (UID: \"1fd1f2cb-320b-495b-b1a9-bd981c71562f\") " pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:20.863868 kubelet[2756]: E0114 06:04:20.863421 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.863868 kubelet[2756]: W0114 06:04:20.863431 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.863868 kubelet[2756]: E0114 06:04:20.863442 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.864068 kubelet[2756]: E0114 06:04:20.863914 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.864068 kubelet[2756]: W0114 06:04:20.863923 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.864068 kubelet[2756]: E0114 06:04:20.863984 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.865070 kubelet[2756]: E0114 06:04:20.864461 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.865070 kubelet[2756]: W0114 06:04:20.864501 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.865070 kubelet[2756]: E0114 06:04:20.864876 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:20.872794 kubelet[2756]: E0114 06:04:20.865968 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.872794 kubelet[2756]: W0114 06:04:20.865979 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.872794 kubelet[2756]: E0114 06:04:20.865989 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.872794 kubelet[2756]: I0114 06:04:20.866006 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fd1f2cb-320b-495b-b1a9-bd981c71562f-kubelet-dir\") pod \"csi-node-driver-sgth2\" (UID: \"1fd1f2cb-320b-495b-b1a9-bd981c71562f\") " pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:20.872794 kubelet[2756]: E0114 06:04:20.866411 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.872794 kubelet[2756]: W0114 06:04:20.866424 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.872794 kubelet[2756]: E0114 06:04:20.866906 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.872794 kubelet[2756]: I0114 06:04:20.867088 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fd1f2cb-320b-495b-b1a9-bd981c71562f-registration-dir\") pod \"csi-node-driver-sgth2\" (UID: \"1fd1f2cb-320b-495b-b1a9-bd981c71562f\") " pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:20.872794 kubelet[2756]: E0114 06:04:20.867227 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.873206 kubelet[2756]: W0114 06:04:20.867238 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.867254 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.868268 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.873206 kubelet[2756]: W0114 06:04:20.868285 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.868299 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.868830 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.873206 kubelet[2756]: W0114 06:04:20.868840 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.868854 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.873206 kubelet[2756]: E0114 06:04:20.870516 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.873206 kubelet[2756]: W0114 06:04:20.870527 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.873814 kubelet[2756]: E0114 06:04:20.870539 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.876942 kernel: audit: type=1300 audit(1768370660.825:534): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcf6181c40 a2=0 a3=7ffcf6181c2c items=0 ppid=2918 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.877004 kernel: audit: type=1327 audit(1768370660.825:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:20.864000 audit[3309]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:20.889628 kernel: audit: type=1325 audit(1768370660.864:535): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:20.864000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf6181c40 a2=0 a3=0 items=0 ppid=2918 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.913038 kernel: audit: type=1300 audit(1768370660.864:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf6181c40 a2=0 a3=0 items=0 ppid=2918 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.913203 kernel: audit: type=1327 audit(1768370660.864:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:20.864000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:20.899000 audit: BPF prog-id=156 op=LOAD Jan 14 06:04:20.932181 kernel: audit: type=1334 audit(1768370660.899:536): prog-id=156 op=LOAD Jan 14 06:04:20.937636 kernel: audit: type=1334 audit(1768370660.900:537): prog-id=157 op=LOAD Jan 14 06:04:20.900000 audit: BPF prog-id=157 op=LOAD Jan 14 06:04:20.900000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.949811 containerd[1601]: time="2026-01-14T06:04:20.944753512Z" level=info msg="connecting to shim 67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9" address="unix:///run/containerd/s/86188dd5765bcfcc788d0f13a2602b4794be4ca14d037b2c3985e3972a0fdbec" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:20.958533 kernel: audit: type=1300 audit(1768370660.900:537): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.973439 kubelet[2756]: E0114 06:04:20.973323 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.973439 kubelet[2756]: W0114 06:04:20.973437 2756 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.973674 kubelet[2756]: E0114 06:04:20.973463 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.976532 kubelet[2756]: E0114 06:04:20.976451 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.976792 kubelet[2756]: W0114 06:04:20.976709 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.977747 kubelet[2756]: E0114 06:04:20.977702 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.981093 kubelet[2756]: E0114 06:04:20.980918 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.981540 kubelet[2756]: W0114 06:04:20.981469 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.981691 kubelet[2756]: E0114 06:04:20.981653 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.983350 kubelet[2756]: E0114 06:04:20.983278 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.983350 kubelet[2756]: W0114 06:04:20.983342 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.983654 kernel: audit: type=1327 audit(1768370660.900:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.984843 kubelet[2756]: E0114 06:04:20.983524 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.984843 kubelet[2756]: E0114 06:04:20.983929 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.984843 kubelet[2756]: W0114 06:04:20.983940 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.984843 kubelet[2756]: E0114 06:04:20.984269 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.984843 kubelet[2756]: E0114 06:04:20.984545 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.984843 kubelet[2756]: W0114 06:04:20.984555 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.984843 kubelet[2756]: E0114 06:04:20.984807 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.985112 kubelet[2756]: E0114 06:04:20.985094 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.985112 kubelet[2756]: W0114 06:04:20.985105 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.900000 audit: BPF prog-id=157 op=UNLOAD Jan 14 06:04:20.900000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.901000 audit: BPF prog-id=158 op=LOAD Jan 14 06:04:20.901000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 
a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.902000 audit: BPF prog-id=159 op=LOAD Jan 14 06:04:20.902000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.902000 audit: BPF prog-id=159 op=UNLOAD Jan 14 06:04:20.902000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.903000 audit: BPF prog-id=158 op=UNLOAD Jan 14 06:04:20.903000 audit[3283]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.903000 audit: BPF prog-id=160 op=LOAD Jan 14 06:04:20.903000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:20.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336136333933333961396364623461356533633061343031653665 Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.985500 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.985997 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.991465 kubelet[2756]: W0114 06:04:20.986008 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.986024 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.986540 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.991465 kubelet[2756]: W0114 06:04:20.986550 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.986793 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.987073 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.991465 kubelet[2756]: W0114 06:04:20.987084 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.991465 kubelet[2756]: E0114 06:04:20.987412 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.987773 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992062 kubelet[2756]: W0114 06:04:20.987784 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.987984 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.988524 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992062 kubelet[2756]: W0114 06:04:20.988535 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.988699 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.989074 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992062 kubelet[2756]: W0114 06:04:20.989085 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.989523 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.992062 kubelet[2756]: E0114 06:04:20.990315 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992490 kubelet[2756]: W0114 06:04:20.990325 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992490 kubelet[2756]: E0114 06:04:20.990521 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.992490 kubelet[2756]: E0114 06:04:20.992305 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992490 kubelet[2756]: W0114 06:04:20.992315 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992490 kubelet[2756]: E0114 06:04:20.992366 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.992867 kubelet[2756]: E0114 06:04:20.992668 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992867 kubelet[2756]: W0114 06:04:20.992678 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.992867 kubelet[2756]: E0114 06:04:20.992825 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.992995 kubelet[2756]: E0114 06:04:20.992901 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.992995 kubelet[2756]: W0114 06:04:20.992911 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.993214 kubelet[2756]: E0114 06:04:20.993068 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.993505 kubelet[2756]: E0114 06:04:20.993418 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.993505 kubelet[2756]: W0114 06:04:20.993474 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.993743 kubelet[2756]: E0114 06:04:20.993543 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.994115 kubelet[2756]: E0114 06:04:20.994027 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.994115 kubelet[2756]: W0114 06:04:20.994088 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.994419 kubelet[2756]: E0114 06:04:20.994400 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.994977 kubelet[2756]: E0114 06:04:20.994898 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.994977 kubelet[2756]: W0114 06:04:20.994951 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.995730 kubelet[2756]: E0114 06:04:20.995662 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.996072 kubelet[2756]: E0114 06:04:20.995996 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.996072 kubelet[2756]: W0114 06:04:20.996031 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.996072 kubelet[2756]: E0114 06:04:20.996041 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.996496 kubelet[2756]: E0114 06:04:20.996426 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.996496 kubelet[2756]: W0114 06:04:20.996461 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.996700 kubelet[2756]: E0114 06:04:20.996510 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.997801 kubelet[2756]: E0114 06:04:20.997727 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.997801 kubelet[2756]: W0114 06:04:20.997783 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.997990 kubelet[2756]: E0114 06:04:20.997937 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:20.998361 kubelet[2756]: E0114 06:04:20.998285 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.998361 kubelet[2756]: W0114 06:04:20.998322 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.998700 kubelet[2756]: E0114 06:04:20.998657 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:20.999720 kubelet[2756]: E0114 06:04:20.999289 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:20.999720 kubelet[2756]: W0114 06:04:20.999301 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:20.999720 kubelet[2756]: E0114 06:04:20.999311 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:21.025234 kubelet[2756]: E0114 06:04:21.024545 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:21.025419 kubelet[2756]: W0114 06:04:21.025340 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:21.025419 kubelet[2756]: E0114 06:04:21.025368 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:21.035265 systemd[1]: Started cri-containerd-67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9.scope - libcontainer container 67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9. Jan 14 06:04:21.043903 containerd[1601]: time="2026-01-14T06:04:21.043814678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864b68dc-lrt75,Uid:56cce3cc-3189-4d5b-a576-f4da7afae3b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931\"" Jan 14 06:04:21.047265 kubelet[2756]: E0114 06:04:21.047201 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:21.050296 containerd[1601]: time="2026-01-14T06:04:21.050265662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 06:04:21.065000 audit: BPF prog-id=161 op=LOAD Jan 14 06:04:21.066000 audit: BPF prog-id=162 op=LOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=162 op=UNLOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=163 op=LOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=164 op=LOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=164 op=UNLOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=163 op=UNLOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.066000 audit: BPF prog-id=165 op=LOAD Jan 14 06:04:21.066000 audit[3373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3358 pid=3373 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:21.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637643763623832323762383132626563633938393137353434313962 Jan 14 06:04:21.121380 containerd[1601]: time="2026-01-14T06:04:21.121318576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tq48t,Uid:11fe8048-e517-4a59-8256-f9c9075d1e74,Namespace:calico-system,Attempt:0,} returns sandbox id \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\"" Jan 14 06:04:21.124477 kubelet[2756]: E0114 06:04:21.124328 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:21.777934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611396354.mount: Deactivated successfully. 
Jan 14 06:04:22.509442 containerd[1601]: time="2026-01-14T06:04:22.509327256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:22.510707 containerd[1601]: time="2026-01-14T06:04:22.510630884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 06:04:22.512419 containerd[1601]: time="2026-01-14T06:04:22.512289332Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:22.515247 containerd[1601]: time="2026-01-14T06:04:22.515187235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:22.516222 containerd[1601]: time="2026-01-14T06:04:22.516123066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.465746366s" Jan 14 06:04:22.516222 containerd[1601]: time="2026-01-14T06:04:22.516180848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 06:04:22.519661 containerd[1601]: time="2026-01-14T06:04:22.518367580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 06:04:22.534776 containerd[1601]: time="2026-01-14T06:04:22.534679067Z" level=info msg="CreateContainer within sandbox \"993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 06:04:22.547367 containerd[1601]: time="2026-01-14T06:04:22.547231585Z" level=info msg="Container c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:22.559986 kubelet[2756]: E0114 06:04:22.559772 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:22.563140 containerd[1601]: time="2026-01-14T06:04:22.562994816Z" level=info msg="CreateContainer within sandbox \"993a639339a9cdb4a5e3c0a401e6e3e6649a906697269795cf576de476844931\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350\"" Jan 14 06:04:22.563920 containerd[1601]: time="2026-01-14T06:04:22.563890453Z" level=info msg="StartContainer for \"c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350\"" Jan 14 06:04:22.565769 containerd[1601]: time="2026-01-14T06:04:22.565370998Z" level=info msg="connecting to shim c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350" address="unix:///run/containerd/s/8bf897b04872fef58959c0c53e7efe84b405c4f79db8c5c76fbe685e0d17ca86" protocol=ttrpc version=3 Jan 14 06:04:22.598095 systemd[1]: Started cri-containerd-c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350.scope - libcontainer container c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350. 
Jan 14 06:04:22.635000 audit: BPF prog-id=166 op=LOAD Jan 14 06:04:22.636000 audit: BPF prog-id=167 op=LOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=167 op=UNLOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=168 op=LOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=169 op=LOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=169 op=UNLOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=168 op=UNLOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.636000 audit: BPF prog-id=170 op=LOAD Jan 14 06:04:22.636000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3272 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332303932623562633661613234623431333631383535663666626161 Jan 14 06:04:22.699361 containerd[1601]: time="2026-01-14T06:04:22.699214005Z" level=info msg="StartContainer for \"c2092b5bc6aa24b41361855f6fbaa94b13c17c3b5f37e5a1f19fe3cfc82f5350\" returns successfully" Jan 14 06:04:22.722491 kubelet[2756]: E0114 06:04:22.722363 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:22.753452 kubelet[2756]: I0114 06:04:22.753145 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5864b68dc-lrt75" podStartSLOduration=1.284180838 podStartE2EDuration="2.753117063s" podCreationTimestamp="2026-01-14 06:04:20 +0000 UTC" firstStartedPulling="2026-01-14 06:04:21.048939263 +0000 UTC m=+23.669623693" lastFinishedPulling="2026-01-14 06:04:22.517875487 +0000 UTC m=+25.138559918" observedRunningTime="2026-01-14 06:04:22.752541596 +0000 UTC m=+25.373226037" watchObservedRunningTime="2026-01-14 
06:04:22.753117063 +0000 UTC m=+25.373801504" Jan 14 06:04:22.756202 kubelet[2756]: E0114 06:04:22.755963 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.756731 kubelet[2756]: W0114 06:04:22.756494 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.756731 kubelet[2756]: E0114 06:04:22.756516 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.757771 kubelet[2756]: E0114 06:04:22.757731 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.757771 kubelet[2756]: W0114 06:04:22.757767 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.757856 kubelet[2756]: E0114 06:04:22.757780 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.758245 kubelet[2756]: E0114 06:04:22.758202 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.758287 kubelet[2756]: W0114 06:04:22.758250 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.758287 kubelet[2756]: E0114 06:04:22.758275 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.759060 kubelet[2756]: E0114 06:04:22.759016 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.759060 kubelet[2756]: W0114 06:04:22.759036 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.759060 kubelet[2756]: E0114 06:04:22.759047 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.760692 kubelet[2756]: E0114 06:04:22.759547 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.760692 kubelet[2756]: W0114 06:04:22.759669 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.760692 kubelet[2756]: E0114 06:04:22.759685 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.762238 kubelet[2756]: E0114 06:04:22.760846 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.762238 kubelet[2756]: W0114 06:04:22.760856 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.762238 kubelet[2756]: E0114 06:04:22.760866 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.762238 kubelet[2756]: E0114 06:04:22.761866 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.763170 kubelet[2756]: W0114 06:04:22.762343 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.763170 kubelet[2756]: E0114 06:04:22.762360 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.763833 kubelet[2756]: E0114 06:04:22.763729 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.763931 kubelet[2756]: W0114 06:04:22.763766 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.763931 kubelet[2756]: E0114 06:04:22.763869 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.764706 kubelet[2756]: E0114 06:04:22.764553 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.764756 kubelet[2756]: W0114 06:04:22.764736 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.764756 kubelet[2756]: E0114 06:04:22.764749 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.765133 kubelet[2756]: E0114 06:04:22.765097 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.765182 kubelet[2756]: W0114 06:04:22.765135 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.765182 kubelet[2756]: E0114 06:04:22.765145 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.765812 kubelet[2756]: E0114 06:04:22.765775 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.765812 kubelet[2756]: W0114 06:04:22.765812 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.765878 kubelet[2756]: E0114 06:04:22.765822 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.766759 kubelet[2756]: E0114 06:04:22.766551 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.766861 kubelet[2756]: W0114 06:04:22.766819 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.766950 kubelet[2756]: E0114 06:04:22.766916 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.767489 kubelet[2756]: E0114 06:04:22.767452 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.767489 kubelet[2756]: W0114 06:04:22.767489 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.767549 kubelet[2756]: E0114 06:04:22.767500 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.767940 kubelet[2756]: E0114 06:04:22.767844 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.767940 kubelet[2756]: W0114 06:04:22.767881 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.767940 kubelet[2756]: E0114 06:04:22.767890 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.768491 kubelet[2756]: E0114 06:04:22.768380 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.768491 kubelet[2756]: W0114 06:04:22.768413 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.768491 kubelet[2756]: E0114 06:04:22.768423 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.806833 kubelet[2756]: E0114 06:04:22.806754 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.806833 kubelet[2756]: W0114 06:04:22.806805 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.806833 kubelet[2756]: E0114 06:04:22.806828 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.807143 kubelet[2756]: E0114 06:04:22.807116 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.807367 kubelet[2756]: W0114 06:04:22.807290 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.807420 kubelet[2756]: E0114 06:04:22.807398 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.807984 kubelet[2756]: E0114 06:04:22.807942 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.807984 kubelet[2756]: W0114 06:04:22.807976 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.808061 kubelet[2756]: E0114 06:04:22.808018 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.808482 kubelet[2756]: E0114 06:04:22.808418 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.808482 kubelet[2756]: W0114 06:04:22.808457 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.808541 kubelet[2756]: E0114 06:04:22.808504 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.809301 kubelet[2756]: E0114 06:04:22.809158 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.809301 kubelet[2756]: W0114 06:04:22.809200 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.810739 kubelet[2756]: E0114 06:04:22.810705 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.812189 kubelet[2756]: E0114 06:04:22.811545 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.812329 kubelet[2756]: W0114 06:04:22.812306 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.812666 kubelet[2756]: E0114 06:04:22.812487 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.813098 kubelet[2756]: E0114 06:04:22.813058 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.813098 kubelet[2756]: W0114 06:04:22.813093 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.813273 kubelet[2756]: E0114 06:04:22.813166 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.813491 kubelet[2756]: E0114 06:04:22.813348 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.813491 kubelet[2756]: W0114 06:04:22.813361 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.813703 kubelet[2756]: E0114 06:04:22.813558 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.814501 kubelet[2756]: E0114 06:04:22.814456 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.814501 kubelet[2756]: W0114 06:04:22.814493 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.814813 kubelet[2756]: E0114 06:04:22.814512 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.815114 kubelet[2756]: E0114 06:04:22.815064 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.815114 kubelet[2756]: W0114 06:04:22.815082 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.815546 kubelet[2756]: E0114 06:04:22.815261 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.815546 kubelet[2756]: E0114 06:04:22.815275 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.815546 kubelet[2756]: W0114 06:04:22.815284 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.815546 kubelet[2756]: E0114 06:04:22.815386 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.815970 kubelet[2756]: E0114 06:04:22.815925 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.815970 kubelet[2756]: W0114 06:04:22.815964 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.816447 kubelet[2756]: E0114 06:04:22.816074 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.816447 kubelet[2756]: E0114 06:04:22.816218 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.816447 kubelet[2756]: W0114 06:04:22.816227 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.816447 kubelet[2756]: E0114 06:04:22.816241 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.817242 kubelet[2756]: E0114 06:04:22.817160 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.817242 kubelet[2756]: W0114 06:04:22.817202 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.817433 kubelet[2756]: E0114 06:04:22.817263 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.817956 kubelet[2756]: E0114 06:04:22.817717 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.817956 kubelet[2756]: W0114 06:04:22.817756 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.817956 kubelet[2756]: E0114 06:04:22.817856 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.818183 kubelet[2756]: E0114 06:04:22.818166 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.818394 kubelet[2756]: W0114 06:04:22.818238 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.818394 kubelet[2756]: E0114 06:04:22.818308 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:22.819013 kubelet[2756]: E0114 06:04:22.818958 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.819013 kubelet[2756]: W0114 06:04:22.818992 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.819013 kubelet[2756]: E0114 06:04:22.819002 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:04:22.819757 kubelet[2756]: E0114 06:04:22.819718 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:04:22.819757 kubelet[2756]: W0114 06:04:22.819749 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:04:22.819757 kubelet[2756]: E0114 06:04:22.819759 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:04:23.508302 containerd[1601]: time="2026-01-14T06:04:23.507085508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:23.511884 containerd[1601]: time="2026-01-14T06:04:23.508620897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:23.511884 containerd[1601]: time="2026-01-14T06:04:23.510162518Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:23.513712 containerd[1601]: time="2026-01-14T06:04:23.513641002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:23.514495 containerd[1601]: time="2026-01-14T06:04:23.514355603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 995.955434ms" Jan 14 06:04:23.514495 containerd[1601]: time="2026-01-14T06:04:23.514420651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 06:04:23.518363 containerd[1601]: time="2026-01-14T06:04:23.518238667Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 06:04:23.539504 containerd[1601]: time="2026-01-14T06:04:23.539463324Z" level=info msg="Container 93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:23.556445 containerd[1601]: time="2026-01-14T06:04:23.556323750Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde\"" Jan 14 06:04:23.557081 containerd[1601]: time="2026-01-14T06:04:23.557019121Z" level=info msg="StartContainer for \"93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde\"" Jan 14 06:04:23.558668 containerd[1601]: time="2026-01-14T06:04:23.558533324Z" level=info msg="connecting to shim 93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde" address="unix:///run/containerd/s/86188dd5765bcfcc788d0f13a2602b4794be4ca14d037b2c3985e3972a0fdbec" protocol=ttrpc version=3 Jan 14 06:04:23.630721 systemd[1]: Started cri-containerd-93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde.scope - libcontainer container 
93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde. Jan 14 06:04:23.720000 audit: BPF prog-id=171 op=LOAD Jan 14 06:04:23.720000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3358 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:23.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626261656363653461613837303738613132636331306565356563 Jan 14 06:04:23.720000 audit: BPF prog-id=172 op=LOAD Jan 14 06:04:23.720000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3358 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:23.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626261656363653461613837303738613132636331306565356563 Jan 14 06:04:23.720000 audit: BPF prog-id=172 op=UNLOAD Jan 14 06:04:23.720000 audit[3518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:23.720000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626261656363653461613837303738613132636331306565356563 Jan 14 06:04:23.720000 audit: BPF prog-id=171 op=UNLOAD Jan 14 06:04:23.720000 audit[3518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:23.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626261656363653461613837303738613132636331306565356563 Jan 14 06:04:23.720000 audit: BPF prog-id=173 op=LOAD Jan 14 06:04:23.720000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3358 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:23.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626261656363653461613837303738613132636331306565356563 Jan 14 06:04:23.729558 kubelet[2756]: I0114 06:04:23.729123 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:04:23.729558 kubelet[2756]: E0114 06:04:23.729533 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 14 06:04:23.754085 containerd[1601]: time="2026-01-14T06:04:23.753971936Z" level=info msg="StartContainer for \"93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde\" returns successfully" Jan 14 06:04:23.761740 systemd[1]: cri-containerd-93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde.scope: Deactivated successfully. Jan 14 06:04:23.764935 containerd[1601]: time="2026-01-14T06:04:23.764809079Z" level=info msg="received container exit event container_id:\"93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde\" id:\"93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde\" pid:3530 exited_at:{seconds:1768370663 nanos:764404772}" Jan 14 06:04:23.767000 audit: BPF prog-id=173 op=UNLOAD Jan 14 06:04:23.802259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93bbaecce4aa87078a12cc10ee5ecc9e0c4d4e64d104fe9336dfd07a0437abde-rootfs.mount: Deactivated successfully. Jan 14 06:04:24.558206 kubelet[2756]: E0114 06:04:24.557947 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:24.736304 kubelet[2756]: E0114 06:04:24.735884 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:24.737385 containerd[1601]: time="2026-01-14T06:04:24.737345889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 06:04:25.623981 kubelet[2756]: I0114 06:04:25.623891 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:04:25.624962 kubelet[2756]: E0114 06:04:25.624893 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:25.693000 audit[3574]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:25.693000 audit[3574]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbf9412e0 a2=0 a3=7fffbf9412cc items=0 ppid=2918 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:25.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:25.700000 audit[3574]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:25.700000 audit[3574]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffbf9412e0 a2=0 a3=7fffbf9412cc items=0 ppid=2918 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:25.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:25.740237 kubelet[2756]: E0114 06:04:25.740121 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:26.557952 kubelet[2756]: E0114 06:04:26.557837 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:27.016538 containerd[1601]: time="2026-01-14T06:04:27.016409179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:27.017526 containerd[1601]: time="2026-01-14T06:04:27.017445526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 06:04:27.019117 containerd[1601]: time="2026-01-14T06:04:27.019024655Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:27.021834 containerd[1601]: time="2026-01-14T06:04:27.021404108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:27.022025 containerd[1601]: time="2026-01-14T06:04:27.021932545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.284542874s" Jan 14 06:04:27.022025 containerd[1601]: time="2026-01-14T06:04:27.021993253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 06:04:27.025477 containerd[1601]: time="2026-01-14T06:04:27.025396466Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 06:04:27.039659 
containerd[1601]: time="2026-01-14T06:04:27.038956734Z" level=info msg="Container b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:27.051200 containerd[1601]: time="2026-01-14T06:04:27.051074510Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd\"" Jan 14 06:04:27.052269 containerd[1601]: time="2026-01-14T06:04:27.052150375Z" level=info msg="StartContainer for \"b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd\"" Jan 14 06:04:27.054462 containerd[1601]: time="2026-01-14T06:04:27.054338760Z" level=info msg="connecting to shim b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd" address="unix:///run/containerd/s/86188dd5765bcfcc788d0f13a2602b4794be4ca14d037b2c3985e3972a0fdbec" protocol=ttrpc version=3 Jan 14 06:04:27.106889 systemd[1]: Started cri-containerd-b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd.scope - libcontainer container b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd. 
Jan 14 06:04:27.216760 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 06:04:27.216872 kernel: audit: type=1334 audit(1768370667.212:568): prog-id=174 op=LOAD Jan 14 06:04:27.212000 audit: BPF prog-id=174 op=LOAD Jan 14 06:04:27.212000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.229069 kernel: audit: type=1300 audit(1768370667.212:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.240909 kernel: audit: type=1327 audit(1768370667.212:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.241245 kernel: audit: type=1334 audit(1768370667.212:569): prog-id=175 op=LOAD Jan 14 06:04:27.212000 audit: BPF prog-id=175 op=LOAD Jan 14 06:04:27.243856 kernel: audit: type=1300 audit(1768370667.212:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.254229 kernel: audit: type=1327 audit(1768370667.212:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.264525 kernel: audit: type=1334 audit(1768370667.212:570): prog-id=175 op=UNLOAD Jan 14 06:04:27.212000 audit: BPF prog-id=175 op=UNLOAD Jan 14 06:04:27.277512 kernel: audit: type=1300 audit(1768370667.212:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.292311 containerd[1601]: time="2026-01-14T06:04:27.292070392Z" level=info msg="StartContainer for \"b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd\" returns successfully" Jan 14 06:04:27.292705 kernel: audit: type=1327 audit(1768370667.212:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.212000 audit: BPF prog-id=174 op=UNLOAD Jan 14 06:04:27.212000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.212000 audit: BPF prog-id=176 op=LOAD Jan 14 06:04:27.212000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3358 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:27.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326331393263663031666235386261356336633432313766626439 Jan 14 06:04:27.298667 kernel: audit: type=1334 audit(1768370667.212:571): prog-id=174 op=UNLOAD Jan 14 06:04:27.750953 kubelet[2756]: E0114 06:04:27.750724 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:28.289364 systemd[1]: cri-containerd-b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd.scope: Deactivated successfully. Jan 14 06:04:28.290089 systemd[1]: cri-containerd-b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd.scope: Consumed 1.245s CPU time, 172.8M memory peak, 3M read from disk, 171.3M written to disk. Jan 14 06:04:28.293792 containerd[1601]: time="2026-01-14T06:04:28.293695490Z" level=info msg="received container exit event container_id:\"b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd\" id:\"b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd\" pid:3591 exited_at:{seconds:1768370668 nanos:293382493}" Jan 14 06:04:28.294000 audit: BPF prog-id=176 op=UNLOAD Jan 14 06:04:28.344148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b12c192cf01fb58ba5c6c4217fbd956ae4fc4d904f200a641767ff8d108a17fd-rootfs.mount: Deactivated successfully. 
Jan 14 06:04:28.360187 kubelet[2756]: I0114 06:04:28.360136 2756 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 06:04:28.435216 kubelet[2756]: I0114 06:04:28.435172 2756 status_manager.go:890] "Failed to get status for pod" podUID="87e21363-a0de-43a7-93ef-08f8312a793d" pod="calico-system/whisker-6bc54ddf7b-nhlfd" err="pods \"whisker-6bc54ddf7b-nhlfd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Jan 14 06:04:28.449349 systemd[1]: Created slice kubepods-besteffort-pod87e21363_a0de_43a7_93ef_08f8312a793d.slice - libcontainer container kubepods-besteffort-pod87e21363_a0de_43a7_93ef_08f8312a793d.slice. Jan 14 06:04:28.471101 systemd[1]: Created slice kubepods-burstable-pod8c744601_7d6b_423f_ba88_6bef4a7dd5ae.slice - libcontainer container kubepods-burstable-pod8c744601_7d6b_423f_ba88_6bef4a7dd5ae.slice. Jan 14 06:04:28.482732 systemd[1]: Created slice kubepods-besteffort-pod3724f055_c35a_48ef_a153_ecc79aaf3801.slice - libcontainer container kubepods-besteffort-pod3724f055_c35a_48ef_a153_ecc79aaf3801.slice. Jan 14 06:04:28.495140 systemd[1]: Created slice kubepods-besteffort-pod0e777262_7a52_479a_bfac_2fd2fb722412.slice - libcontainer container kubepods-besteffort-pod0e777262_7a52_479a_bfac_2fd2fb722412.slice. Jan 14 06:04:28.506266 systemd[1]: Created slice kubepods-besteffort-podc77379be_a206_411d_9fc6_5a9725c3295c.slice - libcontainer container kubepods-besteffort-podc77379be_a206_411d_9fc6_5a9725c3295c.slice. Jan 14 06:04:28.513365 systemd[1]: Created slice kubepods-burstable-podd4551c19_eae2_49ff_b34e_8730e920f6f5.slice - libcontainer container kubepods-burstable-podd4551c19_eae2_49ff_b34e_8730e920f6f5.slice. 
Jan 14 06:04:28.523699 systemd[1]: Created slice kubepods-besteffort-podc28818ff_c451_40e2_8223_e6f03d8b8188.slice - libcontainer container kubepods-besteffort-podc28818ff_c451_40e2_8223_e6f03d8b8188.slice. Jan 14 06:04:28.564259 kubelet[2756]: I0114 06:04:28.563476 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5fz\" (UniqueName: \"kubernetes.io/projected/87e21363-a0de-43a7-93ef-08f8312a793d-kube-api-access-4c5fz\") pod \"whisker-6bc54ddf7b-nhlfd\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " pod="calico-system/whisker-6bc54ddf7b-nhlfd" Jan 14 06:04:28.564259 kubelet[2756]: I0114 06:04:28.563508 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c77379be-a206-411d-9fc6-5a9725c3295c-tigera-ca-bundle\") pod \"calico-kube-controllers-997b9f787-7wfms\" (UID: \"c77379be-a206-411d-9fc6-5a9725c3295c\") " pod="calico-system/calico-kube-controllers-997b9f787-7wfms" Jan 14 06:04:28.564259 kubelet[2756]: I0114 06:04:28.563528 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-backend-key-pair\") pod \"whisker-6bc54ddf7b-nhlfd\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " pod="calico-system/whisker-6bc54ddf7b-nhlfd" Jan 14 06:04:28.564259 kubelet[2756]: I0114 06:04:28.563542 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e777262-7a52-479a-bfac-2fd2fb722412-goldmane-ca-bundle\") pod \"goldmane-666569f655-gbcxx\" (UID: \"0e777262-7a52-479a-bfac-2fd2fb722412\") " pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:28.564259 kubelet[2756]: I0114 06:04:28.563556 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-ca-bundle\") pod \"whisker-6bc54ddf7b-nhlfd\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " pod="calico-system/whisker-6bc54ddf7b-nhlfd" Jan 14 06:04:28.564495 kubelet[2756]: I0114 06:04:28.563642 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c744601-7d6b-423f-ba88-6bef4a7dd5ae-config-volume\") pod \"coredns-668d6bf9bc-m6h4z\" (UID: \"8c744601-7d6b-423f-ba88-6bef4a7dd5ae\") " pod="kube-system/coredns-668d6bf9bc-m6h4z" Jan 14 06:04:28.564495 kubelet[2756]: I0114 06:04:28.563660 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jkn\" (UniqueName: \"kubernetes.io/projected/8c744601-7d6b-423f-ba88-6bef4a7dd5ae-kube-api-access-s4jkn\") pod \"coredns-668d6bf9bc-m6h4z\" (UID: \"8c744601-7d6b-423f-ba88-6bef4a7dd5ae\") " pod="kube-system/coredns-668d6bf9bc-m6h4z" Jan 14 06:04:28.564495 kubelet[2756]: I0114 06:04:28.563675 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r649q\" (UniqueName: \"kubernetes.io/projected/3724f055-c35a-48ef-a153-ecc79aaf3801-kube-api-access-r649q\") pod \"calico-apiserver-5bfb4c95c8-lft2v\" (UID: \"3724f055-c35a-48ef-a153-ecc79aaf3801\") " pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" Jan 14 06:04:28.564495 kubelet[2756]: I0114 06:04:28.563689 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e777262-7a52-479a-bfac-2fd2fb722412-config\") pod \"goldmane-666569f655-gbcxx\" (UID: \"0e777262-7a52-479a-bfac-2fd2fb722412\") " pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:28.564495 kubelet[2756]: I0114 
06:04:28.563706 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblzq\" (UniqueName: \"kubernetes.io/projected/0e777262-7a52-479a-bfac-2fd2fb722412-kube-api-access-wblzq\") pod \"goldmane-666569f655-gbcxx\" (UID: \"0e777262-7a52-479a-bfac-2fd2fb722412\") " pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:28.564764 kubelet[2756]: I0114 06:04:28.563734 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4551c19-eae2-49ff-b34e-8730e920f6f5-config-volume\") pod \"coredns-668d6bf9bc-9tzhc\" (UID: \"d4551c19-eae2-49ff-b34e-8730e920f6f5\") " pod="kube-system/coredns-668d6bf9bc-9tzhc" Jan 14 06:04:28.564764 kubelet[2756]: I0114 06:04:28.563757 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0e777262-7a52-479a-bfac-2fd2fb722412-goldmane-key-pair\") pod \"goldmane-666569f655-gbcxx\" (UID: \"0e777262-7a52-479a-bfac-2fd2fb722412\") " pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:28.564764 kubelet[2756]: I0114 06:04:28.563778 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg59x\" (UniqueName: \"kubernetes.io/projected/c77379be-a206-411d-9fc6-5a9725c3295c-kube-api-access-qg59x\") pod \"calico-kube-controllers-997b9f787-7wfms\" (UID: \"c77379be-a206-411d-9fc6-5a9725c3295c\") " pod="calico-system/calico-kube-controllers-997b9f787-7wfms" Jan 14 06:04:28.564764 kubelet[2756]: I0114 06:04:28.563818 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2rn\" (UniqueName: \"kubernetes.io/projected/c28818ff-c451-40e2-8223-e6f03d8b8188-kube-api-access-dj2rn\") pod \"calico-apiserver-5bfb4c95c8-29zzb\" (UID: \"c28818ff-c451-40e2-8223-e6f03d8b8188\") " 
pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" Jan 14 06:04:28.564764 kubelet[2756]: I0114 06:04:28.563862 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c28818ff-c451-40e2-8223-e6f03d8b8188-calico-apiserver-certs\") pod \"calico-apiserver-5bfb4c95c8-29zzb\" (UID: \"c28818ff-c451-40e2-8223-e6f03d8b8188\") " pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" Jan 14 06:04:28.564950 kubelet[2756]: I0114 06:04:28.564102 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3724f055-c35a-48ef-a153-ecc79aaf3801-calico-apiserver-certs\") pod \"calico-apiserver-5bfb4c95c8-lft2v\" (UID: \"3724f055-c35a-48ef-a153-ecc79aaf3801\") " pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" Jan 14 06:04:28.564950 kubelet[2756]: I0114 06:04:28.564125 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9z2w\" (UniqueName: \"kubernetes.io/projected/d4551c19-eae2-49ff-b34e-8730e920f6f5-kube-api-access-z9z2w\") pod \"coredns-668d6bf9bc-9tzhc\" (UID: \"d4551c19-eae2-49ff-b34e-8730e920f6f5\") " pod="kube-system/coredns-668d6bf9bc-9tzhc" Jan 14 06:04:28.567776 systemd[1]: Created slice kubepods-besteffort-pod1fd1f2cb_320b_495b_b1a9_bd981c71562f.slice - libcontainer container kubepods-besteffort-pod1fd1f2cb_320b_495b_b1a9_bd981c71562f.slice. 
Jan 14 06:04:28.573274 containerd[1601]: time="2026-01-14T06:04:28.572830175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgth2,Uid:1fd1f2cb-320b-495b-b1a9-bd981c71562f,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:28.756792 containerd[1601]: time="2026-01-14T06:04:28.756705860Z" level=error msg="Failed to destroy network for sandbox \"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.758766 containerd[1601]: time="2026-01-14T06:04:28.758558035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc54ddf7b-nhlfd,Uid:87e21363-a0de-43a7-93ef-08f8312a793d,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:28.759835 kubelet[2756]: E0114 06:04:28.759759 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:28.761264 containerd[1601]: time="2026-01-14T06:04:28.761183373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 06:04:28.763775 containerd[1601]: time="2026-01-14T06:04:28.763392408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgth2,Uid:1fd1f2cb-320b-495b-b1a9-bd981c71562f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.764971 kubelet[2756]: E0114 06:04:28.764316 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.764971 kubelet[2756]: E0114 06:04:28.764441 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:28.764971 kubelet[2756]: E0114 06:04:28.764460 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgth2" Jan 14 06:04:28.765122 kubelet[2756]: E0114 06:04:28.764491 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13f5838f59434fff5c75743e606ba679b8ef11fa30580b59e10b59c458c04ce8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:28.776293 kubelet[2756]: E0114 06:04:28.776258 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:28.776867 containerd[1601]: time="2026-01-14T06:04:28.776736758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6h4z,Uid:8c744601-7d6b-423f-ba88-6bef4a7dd5ae,Namespace:kube-system,Attempt:0,}" Jan 14 06:04:28.789121 containerd[1601]: time="2026-01-14T06:04:28.788857799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-lft2v,Uid:3724f055-c35a-48ef-a153-ecc79aaf3801,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:04:28.801959 containerd[1601]: time="2026-01-14T06:04:28.801928921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gbcxx,Uid:0e777262-7a52-479a-bfac-2fd2fb722412,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:28.818320 kubelet[2756]: E0114 06:04:28.817828 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:28.821960 containerd[1601]: time="2026-01-14T06:04:28.821867597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tzhc,Uid:d4551c19-eae2-49ff-b34e-8730e920f6f5,Namespace:kube-system,Attempt:0,}" Jan 14 06:04:28.822670 containerd[1601]: time="2026-01-14T06:04:28.822647379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-997b9f787-7wfms,Uid:c77379be-a206-411d-9fc6-5a9725c3295c,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:28.828278 containerd[1601]: time="2026-01-14T06:04:28.828227815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-29zzb,Uid:c28818ff-c451-40e2-8223-e6f03d8b8188,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:04:28.937678 containerd[1601]: time="2026-01-14T06:04:28.937361980Z" level=error msg="Failed to destroy network for sandbox \"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.944248 containerd[1601]: time="2026-01-14T06:04:28.944208469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc54ddf7b-nhlfd,Uid:87e21363-a0de-43a7-93ef-08f8312a793d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.945394 kubelet[2756]: E0114 06:04:28.945278 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.945394 kubelet[2756]: E0114 06:04:28.945341 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6bc54ddf7b-nhlfd" Jan 14 06:04:28.945394 kubelet[2756]: E0114 06:04:28.945365 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bc54ddf7b-nhlfd" Jan 14 06:04:28.946312 kubelet[2756]: E0114 06:04:28.946275 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bc54ddf7b-nhlfd_calico-system(87e21363-a0de-43a7-93ef-08f8312a793d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bc54ddf7b-nhlfd_calico-system(87e21363-a0de-43a7-93ef-08f8312a793d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deb478258b3e889661f8c082c6f44073bc8d9acebb755d5ed75740cb77e57bb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bc54ddf7b-nhlfd" podUID="87e21363-a0de-43a7-93ef-08f8312a793d" Jan 14 06:04:28.966432 containerd[1601]: time="2026-01-14T06:04:28.966323897Z" level=error msg="Failed to destroy network for sandbox \"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.974687 containerd[1601]: time="2026-01-14T06:04:28.973912038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-lft2v,Uid:3724f055-c35a-48ef-a153-ecc79aaf3801,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.974925 kubelet[2756]: E0114 06:04:28.974234 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:28.974925 kubelet[2756]: E0114 06:04:28.974287 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" Jan 14 06:04:28.974925 kubelet[2756]: E0114 06:04:28.974307 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" Jan 14 06:04:28.975019 kubelet[2756]: E0114 06:04:28.974346 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e988e3fee3f8a8e2328348ac06683580106be6de30c8540f9911891bab7f5477\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:04:29.003507 containerd[1601]: time="2026-01-14T06:04:29.003356980Z" level=error msg="Failed to destroy network for sandbox \"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.015951 containerd[1601]: time="2026-01-14T06:04:29.015743612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gbcxx,Uid:0e777262-7a52-479a-bfac-2fd2fb722412,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.016283 kubelet[2756]: E0114 06:04:29.016246 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.016759 kubelet[2756]: E0114 06:04:29.016391 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:29.016759 kubelet[2756]: E0114 06:04:29.016416 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gbcxx" Jan 14 06:04:29.016759 kubelet[2756]: E0114 06:04:29.016453 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e48c3c8dbec783c0aaca551e29312207b8e92f7c56d6e52f7efc339433f0529a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:04:29.040648 containerd[1601]: time="2026-01-14T06:04:29.040465526Z" level=error msg="Failed to 
destroy network for sandbox \"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.049974 containerd[1601]: time="2026-01-14T06:04:29.049940536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6h4z,Uid:8c744601-7d6b-423f-ba88-6bef4a7dd5ae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.050493 kubelet[2756]: E0114 06:04:29.050430 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.050677 kubelet[2756]: E0114 06:04:29.050510 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6h4z" Jan 14 06:04:29.050677 kubelet[2756]: E0114 06:04:29.050537 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m6h4z" Jan 14 06:04:29.050744 kubelet[2756]: E0114 06:04:29.050679 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m6h4z_kube-system(8c744601-7d6b-423f-ba88-6bef4a7dd5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m6h4z_kube-system(8c744601-7d6b-423f-ba88-6bef4a7dd5ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"816e8dd3ac4e3852f940ac9e3be5f3cfbb21168596c938fedcf340f4720e118b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m6h4z" podUID="8c744601-7d6b-423f-ba88-6bef4a7dd5ae" Jan 14 06:04:29.052116 containerd[1601]: time="2026-01-14T06:04:29.051955104Z" level=error msg="Failed to destroy network for sandbox \"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.056673 containerd[1601]: time="2026-01-14T06:04:29.056217124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tzhc,Uid:d4551c19-eae2-49ff-b34e-8730e920f6f5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.056974 kubelet[2756]: E0114 06:04:29.056496 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.056974 kubelet[2756]: E0114 06:04:29.056532 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tzhc" Jan 14 06:04:29.056974 kubelet[2756]: E0114 06:04:29.056550 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tzhc" Jan 14 06:04:29.057073 kubelet[2756]: E0114 06:04:29.056677 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9tzhc_kube-system(d4551c19-eae2-49ff-b34e-8730e920f6f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9tzhc_kube-system(d4551c19-eae2-49ff-b34e-8730e920f6f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"337d71bbade265bd0eebf100ff768b029c61ec8331704e66f6f240f5eaa16d0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9tzhc" podUID="d4551c19-eae2-49ff-b34e-8730e920f6f5" Jan 14 06:04:29.076188 containerd[1601]: time="2026-01-14T06:04:29.075849963Z" level=error msg="Failed to destroy network for sandbox \"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.084110 containerd[1601]: time="2026-01-14T06:04:29.083944901Z" level=error msg="Failed to destroy network for sandbox \"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.090493 containerd[1601]: time="2026-01-14T06:04:29.090403537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-29zzb,Uid:c28818ff-c451-40e2-8223-e6f03d8b8188,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.090893 kubelet[2756]: E0114 06:04:29.090819 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.090947 kubelet[2756]: E0114 06:04:29.090894 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" Jan 14 06:04:29.090947 kubelet[2756]: E0114 06:04:29.090914 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" Jan 14 06:04:29.091006 kubelet[2756]: E0114 06:04:29.090947 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0aa68bb28cd399b544c8479d20792f96c61c4889edd1cde3e893a633ac0f1e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" 
podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:04:29.091828 containerd[1601]: time="2026-01-14T06:04:29.091729536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-997b9f787-7wfms,Uid:c77379be-a206-411d-9fc6-5a9725c3295c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.092018 kubelet[2756]: E0114 06:04:29.091940 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:04:29.092018 kubelet[2756]: E0114 06:04:29.091971 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" Jan 14 06:04:29.092018 kubelet[2756]: E0114 06:04:29.091985 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" Jan 14 06:04:29.092108 kubelet[2756]: E0114 06:04:29.092015 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f69027469ac1870805d50139c8d9512297693b01de7983602043748738ab3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:04:29.353494 systemd[1]: run-netns-cni\x2da6a8ae96\x2dff92\x2d0d15\x2d9847\x2d46217abaa962.mount: Deactivated successfully. Jan 14 06:04:35.680441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482694976.mount: Deactivated successfully. 
Jan 14 06:04:35.747649 containerd[1601]: time="2026-01-14T06:04:35.725005375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 06:04:35.747649 containerd[1601]: time="2026-01-14T06:04:35.730152324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.968807887s" Jan 14 06:04:35.747649 containerd[1601]: time="2026-01-14T06:04:35.747623283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 06:04:35.748321 containerd[1601]: time="2026-01-14T06:04:35.737036566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:35.748877 containerd[1601]: time="2026-01-14T06:04:35.748768417Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:35.749646 containerd[1601]: time="2026-01-14T06:04:35.749454933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:04:35.763888 containerd[1601]: time="2026-01-14T06:04:35.763737468Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 06:04:35.795986 containerd[1601]: time="2026-01-14T06:04:35.795855798Z" level=info msg="Container 
626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:35.816643 containerd[1601]: time="2026-01-14T06:04:35.816459720Z" level=info msg="CreateContainer within sandbox \"67d7cb8227b812becc9891754419bacf0fbc86f52768af8463e9787b06f39fb9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e\"" Jan 14 06:04:35.818280 containerd[1601]: time="2026-01-14T06:04:35.818042542Z" level=info msg="StartContainer for \"626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e\"" Jan 14 06:04:35.820353 containerd[1601]: time="2026-01-14T06:04:35.820254788Z" level=info msg="connecting to shim 626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e" address="unix:///run/containerd/s/86188dd5765bcfcc788d0f13a2602b4794be4ca14d037b2c3985e3972a0fdbec" protocol=ttrpc version=3 Jan 14 06:04:35.872045 systemd[1]: Started cri-containerd-626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e.scope - libcontainer container 626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e. 
Jan 14 06:04:35.965000 audit: BPF prog-id=177 op=LOAD Jan 14 06:04:35.971683 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 06:04:35.971821 kernel: audit: type=1334 audit(1768370675.965:574): prog-id=177 op=LOAD Jan 14 06:04:35.965000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:35.992701 kernel: audit: type=1300 audit(1768370675.965:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:35.992811 kernel: audit: type=1327 audit(1768370675.965:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:35.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:36.010366 kernel: audit: type=1334 audit(1768370675.965:575): prog-id=178 op=LOAD Jan 14 06:04:35.965000 audit: BPF prog-id=178 op=LOAD Jan 14 06:04:35.965000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:35.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:36.035744 kernel: audit: type=1300 audit(1768370675.965:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:36.035860 kernel: audit: type=1327 audit(1768370675.965:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:35.965000 audit: BPF prog-id=178 op=UNLOAD Jan 14 06:04:36.039116 kernel: audit: type=1334 audit(1768370675.965:576): prog-id=178 op=UNLOAD Jan 14 06:04:35.965000 audit[3899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:36.051085 kernel: audit: type=1300 audit(1768370675.965:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:36.063689 kernel: audit: type=1327 audit(1768370675.965:576): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:35.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:36.067253 kernel: audit: type=1334 audit(1768370675.965:577): prog-id=177 op=UNLOAD Jan 14 06:04:35.965000 audit: BPF prog-id=177 op=UNLOAD Jan 14 06:04:35.965000 audit[3899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:35.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:35.965000 audit: BPF prog-id=179 op=LOAD Jan 14 06:04:35.965000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3358 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:35.965000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632366236626232343632616532383234313239613734623731613235 Jan 14 06:04:36.070222 containerd[1601]: time="2026-01-14T06:04:36.070069116Z" level=info msg="StartContainer for \"626b6bb2462ae2824129a74b71a25df3eda4b31738a490f5a5df7a056516426e\" returns successfully" Jan 14 06:04:36.199931 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 06:04:36.200109 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 06:04:36.540112 kubelet[2756]: I0114 06:04:36.539927 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-backend-key-pair\") pod \"87e21363-a0de-43a7-93ef-08f8312a793d\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " Jan 14 06:04:36.540902 kubelet[2756]: I0114 06:04:36.540294 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-ca-bundle\") pod \"87e21363-a0de-43a7-93ef-08f8312a793d\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " Jan 14 06:04:36.540902 kubelet[2756]: I0114 06:04:36.540330 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c5fz\" (UniqueName: \"kubernetes.io/projected/87e21363-a0de-43a7-93ef-08f8312a793d-kube-api-access-4c5fz\") pod \"87e21363-a0de-43a7-93ef-08f8312a793d\" (UID: \"87e21363-a0de-43a7-93ef-08f8312a793d\") " Jan 14 06:04:36.542494 kubelet[2756]: I0114 06:04:36.542411 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "87e21363-a0de-43a7-93ef-08f8312a793d" (UID: "87e21363-a0de-43a7-93ef-08f8312a793d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 06:04:36.551697 kubelet[2756]: I0114 06:04:36.551299 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e21363-a0de-43a7-93ef-08f8312a793d-kube-api-access-4c5fz" (OuterVolumeSpecName: "kube-api-access-4c5fz") pod "87e21363-a0de-43a7-93ef-08f8312a793d" (UID: "87e21363-a0de-43a7-93ef-08f8312a793d"). InnerVolumeSpecName "kube-api-access-4c5fz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 06:04:36.553140 kubelet[2756]: I0114 06:04:36.553000 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "87e21363-a0de-43a7-93ef-08f8312a793d" (UID: "87e21363-a0de-43a7-93ef-08f8312a793d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 06:04:36.641632 kubelet[2756]: I0114 06:04:36.641557 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 06:04:36.641632 kubelet[2756]: I0114 06:04:36.641595 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e21363-a0de-43a7-93ef-08f8312a793d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 06:04:36.641632 kubelet[2756]: I0114 06:04:36.641606 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4c5fz\" (UniqueName: \"kubernetes.io/projected/87e21363-a0de-43a7-93ef-08f8312a793d-kube-api-access-4c5fz\") on node \"localhost\" DevicePath \"\"" Jan 14 06:04:36.681369 systemd[1]: var-lib-kubelet-pods-87e21363\x2da0de\x2d43a7\x2d93ef\x2d08f8312a793d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4c5fz.mount: Deactivated successfully. Jan 14 06:04:36.681932 systemd[1]: var-lib-kubelet-pods-87e21363\x2da0de\x2d43a7\x2d93ef\x2d08f8312a793d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 06:04:36.790642 kubelet[2756]: E0114 06:04:36.790035 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:36.800246 systemd[1]: Removed slice kubepods-besteffort-pod87e21363_a0de_43a7_93ef_08f8312a793d.slice - libcontainer container kubepods-besteffort-pod87e21363_a0de_43a7_93ef_08f8312a793d.slice. 
Jan 14 06:04:36.834467 kubelet[2756]: I0114 06:04:36.834312 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tq48t" podStartSLOduration=2.212605549 podStartE2EDuration="16.834293774s" podCreationTimestamp="2026-01-14 06:04:20 +0000 UTC" firstStartedPulling="2026-01-14 06:04:21.127346315 +0000 UTC m=+23.748030746" lastFinishedPulling="2026-01-14 06:04:35.74903454 +0000 UTC m=+38.369718971" observedRunningTime="2026-01-14 06:04:36.832049477 +0000 UTC m=+39.452733929" watchObservedRunningTime="2026-01-14 06:04:36.834293774 +0000 UTC m=+39.454978206" Jan 14 06:04:36.977765 systemd[1]: Created slice kubepods-besteffort-poddb149086_b13e_4a98_bab8_a1cf713424f8.slice - libcontainer container kubepods-besteffort-poddb149086_b13e_4a98_bab8_a1cf713424f8.slice. Jan 14 06:04:37.045157 kubelet[2756]: I0114 06:04:37.044946 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db149086-b13e-4a98-bab8-a1cf713424f8-whisker-backend-key-pair\") pod \"whisker-6794498458-8d9kg\" (UID: \"db149086-b13e-4a98-bab8-a1cf713424f8\") " pod="calico-system/whisker-6794498458-8d9kg" Jan 14 06:04:37.045419 kubelet[2756]: I0114 06:04:37.045317 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db149086-b13e-4a98-bab8-a1cf713424f8-whisker-ca-bundle\") pod \"whisker-6794498458-8d9kg\" (UID: \"db149086-b13e-4a98-bab8-a1cf713424f8\") " pod="calico-system/whisker-6794498458-8d9kg" Jan 14 06:04:37.045419 kubelet[2756]: I0114 06:04:37.045392 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtbj\" (UniqueName: \"kubernetes.io/projected/db149086-b13e-4a98-bab8-a1cf713424f8-kube-api-access-qdtbj\") pod \"whisker-6794498458-8d9kg\" (UID: 
\"db149086-b13e-4a98-bab8-a1cf713424f8\") " pod="calico-system/whisker-6794498458-8d9kg" Jan 14 06:04:37.284842 containerd[1601]: time="2026-01-14T06:04:37.284800833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6794498458-8d9kg,Uid:db149086-b13e-4a98-bab8-a1cf713424f8,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:37.562689 kubelet[2756]: I0114 06:04:37.562338 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e21363-a0de-43a7-93ef-08f8312a793d" path="/var/lib/kubelet/pods/87e21363-a0de-43a7-93ef-08f8312a793d/volumes" Jan 14 06:04:37.572138 systemd-networkd[1505]: cali78e9f2073c5: Link UP Jan 14 06:04:37.573188 systemd-networkd[1505]: cali78e9f2073c5: Gained carrier Jan 14 06:04:37.601277 containerd[1601]: 2026-01-14 06:04:37.323 [INFO][3966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:04:37.601277 containerd[1601]: 2026-01-14 06:04:37.356 [INFO][3966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6794498458--8d9kg-eth0 whisker-6794498458- calico-system db149086-b13e-4a98-bab8-a1cf713424f8 906 0 2026-01-14 06:04:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6794498458 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6794498458-8d9kg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali78e9f2073c5 [] [] }} ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-" Jan 14 06:04:37.601277 containerd[1601]: 2026-01-14 06:04:37.356 [INFO][3966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" 
Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.601277 containerd[1601]: 2026-01-14 06:04:37.474 [INFO][3982] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" HandleID="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Workload="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.475 [INFO][3982] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" HandleID="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Workload="localhost-k8s-whisker--6794498458--8d9kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6794498458-8d9kg", "timestamp":"2026-01-14 06:04:37.474215755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.475 [INFO][3982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.475 [INFO][3982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.476 [INFO][3982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.492 [INFO][3982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" host="localhost" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.502 [INFO][3982] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.510 [INFO][3982] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.516 [INFO][3982] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.518 [INFO][3982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:37.601746 containerd[1601]: 2026-01-14 06:04:37.519 [INFO][3982] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" host="localhost" Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.521 [INFO][3982] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34 Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.537 [INFO][3982] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" host="localhost" Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.551 [INFO][3982] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" host="localhost" Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.551 [INFO][3982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" host="localhost" Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.551 [INFO][3982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:37.602234 containerd[1601]: 2026-01-14 06:04:37.551 [INFO][3982] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" HandleID="k8s-pod-network.695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Workload="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.602468 containerd[1601]: 2026-01-14 06:04:37.555 [INFO][3966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6794498458--8d9kg-eth0", GenerateName:"whisker-6794498458-", Namespace:"calico-system", SelfLink:"", UID:"db149086-b13e-4a98-bab8-a1cf713424f8", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6794498458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6794498458-8d9kg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78e9f2073c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:37.602468 containerd[1601]: 2026-01-14 06:04:37.556 [INFO][3966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.602746 containerd[1601]: 2026-01-14 06:04:37.556 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78e9f2073c5 ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.602746 containerd[1601]: 2026-01-14 06:04:37.573 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.602878 containerd[1601]: 2026-01-14 06:04:37.573 [INFO][3966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" 
WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6794498458--8d9kg-eth0", GenerateName:"whisker-6794498458-", Namespace:"calico-system", SelfLink:"", UID:"db149086-b13e-4a98-bab8-a1cf713424f8", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6794498458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34", Pod:"whisker-6794498458-8d9kg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78e9f2073c5", MAC:"32:13:55:3e:9c:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:37.603019 containerd[1601]: 2026-01-14 06:04:37.596 [INFO][3966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" Namespace="calico-system" Pod="whisker-6794498458-8d9kg" WorkloadEndpoint="localhost-k8s-whisker--6794498458--8d9kg-eth0" Jan 14 06:04:37.695209 containerd[1601]: time="2026-01-14T06:04:37.695132648Z" level=info msg="connecting to shim 
695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34" address="unix:///run/containerd/s/4f68fb4cc1b60b99034104422a5874b3ad582fa46d5dfac31551f5bd0b199500" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:37.739942 systemd[1]: Started cri-containerd-695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34.scope - libcontainer container 695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34. Jan 14 06:04:37.758000 audit: BPF prog-id=180 op=LOAD Jan 14 06:04:37.759000 audit: BPF prog-id=181 op=LOAD Jan 14 06:04:37.759000 audit[4014]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.760000 audit: BPF prog-id=181 op=UNLOAD Jan 14 06:04:37.760000 audit[4014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.760000 audit: BPF prog-id=182 op=LOAD Jan 14 06:04:37.760000 audit[4014]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.760000 audit: BPF prog-id=183 op=LOAD Jan 14 06:04:37.760000 audit[4014]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.761000 audit: BPF prog-id=183 op=UNLOAD Jan 14 06:04:37.761000 audit[4014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.761000 audit: BPF prog-id=182 op=UNLOAD Jan 14 06:04:37.761000 audit[4014]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.761000 audit: BPF prog-id=184 op=LOAD Jan 14 06:04:37.761000 audit[4014]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4004 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:37.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639356463663632666636323536323135363262653161646534346333 Jan 14 06:04:37.763672 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:37.793518 kubelet[2756]: I0114 06:04:37.793491 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:04:37.795365 kubelet[2756]: E0114 06:04:37.795286 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:37.819870 containerd[1601]: time="2026-01-14T06:04:37.819530376Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6794498458-8d9kg,Uid:db149086-b13e-4a98-bab8-a1cf713424f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"695dcf62ff625621562be1ade44c3987c8b5c8721d096f722fe44408b2837e34\"" Jan 14 06:04:37.826418 containerd[1601]: time="2026-01-14T06:04:37.826197279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:04:37.904133 containerd[1601]: time="2026-01-14T06:04:37.904011226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:37.906893 containerd[1601]: time="2026-01-14T06:04:37.906779900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:04:37.907972 containerd[1601]: time="2026-01-14T06:04:37.907100847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:37.908164 kubelet[2756]: E0114 06:04:37.908034 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:04:37.908164 kubelet[2756]: E0114 06:04:37.908139 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:04:37.923453 kubelet[2756]: E0114 06:04:37.923291 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c30e3dd0f1294436b07d32775ce4f267,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:37.933728 containerd[1601]: time="2026-01-14T06:04:37.933017286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:04:38.015629 containerd[1601]: 
time="2026-01-14T06:04:38.015456532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:38.021956 containerd[1601]: time="2026-01-14T06:04:38.021723407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:04:38.022128 containerd[1601]: time="2026-01-14T06:04:38.022091568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:38.022448 kubelet[2756]: E0114 06:04:38.022392 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:04:38.022448 kubelet[2756]: E0114 06:04:38.022440 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:04:38.022671 kubelet[2756]: E0114 06:04:38.022538 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:38.024213 kubelet[2756]: E0114 06:04:38.024173 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:04:38.180000 audit: BPF prog-id=185 op=LOAD Jan 14 06:04:38.180000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522df7d0 a2=98 a3=1fffffffffffffff items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.180000 audit: BPF prog-id=185 op=UNLOAD Jan 14 06:04:38.180000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff522df7a0 a3=0 items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 06:04:38.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.180000 audit: BPF prog-id=186 op=LOAD Jan 14 06:04:38.180000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522df6b0 a2=94 a3=3 items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.181000 audit: BPF prog-id=186 op=UNLOAD Jan 14 06:04:38.181000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff522df6b0 a2=94 a3=3 items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.181000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.181000 audit: BPF prog-id=187 op=LOAD Jan 14 06:04:38.181000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522df6f0 a2=94 a3=7fff522df8d0 items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.181000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.181000 audit: BPF prog-id=187 op=UNLOAD Jan 14 06:04:38.181000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff522df6f0 a2=94 a3=7fff522df8d0 items=0 ppid=4080 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.181000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:04:38.185000 audit: BPF prog-id=188 op=LOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcc78c250 a2=98 a3=3 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.185000 audit: BPF prog-id=188 op=UNLOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffcc78c220 a3=0 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.185000 audit: BPF prog-id=189 op=LOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcc78c040 a2=94 a3=54428f items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.185000 audit: BPF prog-id=189 op=UNLOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcc78c040 a2=94 a3=54428f items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.185000 audit: BPF prog-id=190 op=LOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcc78c070 a2=94 a3=2 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.185000 audit: BPF prog-id=190 op=UNLOAD Jan 14 06:04:38.185000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcc78c070 a2=0 a3=2 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.185000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 
06:04:38.407000 audit: BPF prog-id=191 op=LOAD Jan 14 06:04:38.407000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcc78bf30 a2=94 a3=1 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.407000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.407000 audit: BPF prog-id=191 op=UNLOAD Jan 14 06:04:38.407000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffcc78bf30 a2=94 a3=1 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.407000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.416000 audit: BPF prog-id=192 op=LOAD Jan 14 06:04:38.416000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffcc78bf20 a2=94 a3=4 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.416000 audit: BPF prog-id=192 op=UNLOAD Jan 14 06:04:38.416000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffcc78bf20 a2=0 a3=4 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.417000 audit: BPF prog-id=193 op=LOAD Jan 14 06:04:38.417000 
audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcc78bd80 a2=94 a3=5 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.417000 audit: BPF prog-id=193 op=UNLOAD Jan 14 06:04:38.417000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffcc78bd80 a2=0 a3=5 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.417000 audit: BPF prog-id=194 op=LOAD Jan 14 06:04:38.417000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffcc78bfa0 a2=94 a3=6 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.417000 audit: BPF prog-id=194 op=UNLOAD Jan 14 06:04:38.417000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffcc78bfa0 a2=0 a3=6 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.417000 audit: BPF prog-id=195 op=LOAD Jan 14 06:04:38.417000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 
a1=7fffcc78b750 a2=94 a3=88 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.418000 audit: BPF prog-id=196 op=LOAD Jan 14 06:04:38.418000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffcc78b5d0 a2=94 a3=2 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.418000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.418000 audit: BPF prog-id=196 op=UNLOAD Jan 14 06:04:38.418000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffcc78b600 a2=0 a3=7fffcc78b700 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.418000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.418000 audit: BPF prog-id=195 op=UNLOAD Jan 14 06:04:38.418000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=30b17d10 a2=0 a3=eb42cbacc339bca1 items=0 ppid=4080 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.418000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:04:38.433000 audit: BPF prog-id=197 op=LOAD Jan 14 06:04:38.433000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8262fa50 a2=98 a3=1999999999999999 items=0 
ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.433000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:04:38.434000 audit: BPF prog-id=197 op=UNLOAD Jan 14 06:04:38.434000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8262fa20 a3=0 items=0 ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:04:38.434000 audit: BPF prog-id=198 op=LOAD Jan 14 06:04:38.434000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8262f930 a2=94 a3=ffff items=0 ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:04:38.434000 audit: BPF prog-id=198 op=UNLOAD Jan 14 06:04:38.434000 audit[4172]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8262f930 a2=94 a3=ffff items=0 ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:04:38.434000 audit: BPF prog-id=199 op=LOAD Jan 14 06:04:38.434000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8262f970 a2=94 a3=7fff8262fb50 items=0 ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:04:38.434000 audit: BPF prog-id=199 op=UNLOAD Jan 14 06:04:38.434000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8262f970 a2=94 a3=7fff8262fb50 items=0 ppid=4080 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F 
Jan 14 06:04:38.535886 systemd-networkd[1505]: vxlan.calico: Link UP Jan 14 06:04:38.535919 systemd-networkd[1505]: vxlan.calico: Gained carrier Jan 14 06:04:38.572000 audit: BPF prog-id=200 op=LOAD Jan 14 06:04:38.572000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd19c7080 a2=98 a3=0 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.572000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.573000 audit: BPF prog-id=200 op=UNLOAD Jan 14 06:04:38.573000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd19c7050 a3=0 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.573000 audit: BPF prog-id=201 op=LOAD Jan 14 06:04:38.573000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd19c6e90 a2=94 a3=54428f items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.573000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.573000 audit: BPF prog-id=201 op=UNLOAD Jan 14 06:04:38.573000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd19c6e90 a2=94 a3=54428f items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.573000 audit: BPF prog-id=202 op=LOAD Jan 14 06:04:38.573000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd19c6ec0 a2=94 a3=2 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.574000 audit: BPF prog-id=202 op=UNLOAD Jan 14 06:04:38.574000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd19c6ec0 a2=0 a3=2 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.574000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.574000 audit: BPF prog-id=203 op=LOAD Jan 14 06:04:38.574000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd19c6c70 a2=94 a3=4 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.574000 audit: BPF prog-id=203 op=UNLOAD Jan 14 06:04:38.574000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd19c6c70 a2=94 a3=4 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.574000 audit: BPF prog-id=204 op=LOAD Jan 14 06:04:38.574000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd19c6d70 a2=94 a3=7ffcd19c6ef0 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.574000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.574000 audit: BPF prog-id=204 op=UNLOAD Jan 14 06:04:38.574000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd19c6d70 a2=0 a3=7ffcd19c6ef0 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.576000 audit: BPF prog-id=205 op=LOAD Jan 14 06:04:38.576000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd19c64a0 a2=94 a3=2 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.576000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.577000 audit: BPF prog-id=205 op=UNLOAD Jan 14 06:04:38.577000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd19c64a0 a2=0 a3=2 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.577000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.577000 audit: BPF prog-id=206 op=LOAD Jan 14 06:04:38.577000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd19c65a0 a2=94 a3=30 items=0 ppid=4080 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.577000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:04:38.586000 audit: BPF prog-id=207 op=LOAD Jan 14 06:04:38.586000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6e2e4ef0 a2=98 a3=0 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.586000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.586000 audit: BPF prog-id=207 op=UNLOAD Jan 14 06:04:38.586000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6e2e4ec0 a3=0 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.586000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.587000 audit: BPF prog-id=208 op=LOAD Jan 14 06:04:38.587000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6e2e4ce0 a2=94 a3=54428f items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.587000 audit: BPF prog-id=208 op=UNLOAD Jan 14 06:04:38.587000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6e2e4ce0 a2=94 a3=54428f items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.587000 audit: BPF prog-id=209 op=LOAD Jan 14 06:04:38.587000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6e2e4d10 a2=94 a3=2 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.587000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.587000 audit: BPF prog-id=209 op=UNLOAD Jan 14 06:04:38.587000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6e2e4d10 a2=0 a3=2 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.587000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.768000 audit: BPF prog-id=210 op=LOAD Jan 14 06:04:38.768000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6e2e4bd0 a2=94 a3=1 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.768000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.769000 audit: BPF prog-id=210 op=UNLOAD Jan 14 06:04:38.769000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc6e2e4bd0 a2=94 a3=1 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.769000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.778000 audit: BPF prog-id=211 op=LOAD Jan 14 06:04:38.778000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6e2e4bc0 a2=94 a3=4 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.778000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.778000 audit: BPF prog-id=211 op=UNLOAD Jan 14 06:04:38.778000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6e2e4bc0 a2=0 a3=4 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.778000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.778000 audit: BPF prog-id=212 op=LOAD Jan 14 06:04:38.778000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6e2e4a20 a2=94 a3=5 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.778000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=212 op=UNLOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6e2e4a20 a2=0 a3=5 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=213 op=LOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6e2e4c40 a2=94 a3=6 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=213 op=UNLOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc6e2e4c40 a2=0 a3=6 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=214 op=LOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6e2e43f0 a2=94 a3=88 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=215 op=LOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc6e2e4270 a2=94 a3=2 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.779000 audit: BPF prog-id=215 op=UNLOAD Jan 14 06:04:38.779000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc6e2e42a0 a2=0 a3=7ffc6e2e43a0 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.779000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.780000 audit: BPF prog-id=214 op=UNLOAD Jan 14 06:04:38.780000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3adf9d10 a2=0 a3=5759f9b6078f8582 items=0 ppid=4080 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.780000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:04:38.793000 audit: BPF prog-id=206 op=UNLOAD Jan 14 06:04:38.793000 audit[4080]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00077ef40 a2=0 a3=0 items=0 ppid=4041 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.793000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 06:04:38.812233 kubelet[2756]: E0114 06:04:38.812143 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:04:38.870000 audit[4227]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:38.870000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbc34f610 a2=0 a3=7ffdbc34f5fc items=0 ppid=2918 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:38.875000 audit[4227]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:38.875000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdbc34f610 a2=0 a3=0 items=0 ppid=2918 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:38.888000 audit[4236]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4236 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:38.888000 audit[4236]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcd36cd230 a2=0 a3=7ffcd36cd21c items=0 ppid=4080 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.888000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:38.889000 audit[4235]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4235 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:38.889000 audit[4235]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffec0157520 a2=0 a3=55a90899a000 items=0 ppid=4080 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.889000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:38.899000 audit[4234]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:38.899000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe35eef590 a2=0 a3=7ffe35eef57c items=0 ppid=4080 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.899000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:38.907000 audit[4237]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 
06:04:38.907000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdaaeb33d0 a2=0 a3=7ffdaaeb33bc items=0 ppid=4080 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:38.907000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:39.533094 systemd-networkd[1505]: cali78e9f2073c5: Gained IPv6LL Jan 14 06:04:39.809042 kubelet[2756]: E0114 06:04:39.808856 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:04:40.429039 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Jan 14 06:04:40.559271 containerd[1601]: time="2026-01-14T06:04:40.559226090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gbcxx,Uid:0e777262-7a52-479a-bfac-2fd2fb722412,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:40.559937 containerd[1601]: time="2026-01-14T06:04:40.559441077Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-lft2v,Uid:3724f055-c35a-48ef-a153-ecc79aaf3801,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:04:40.559937 containerd[1601]: time="2026-01-14T06:04:40.559230019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgth2,Uid:1fd1f2cb-320b-495b-b1a9-bd981c71562f,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:40.834188 systemd-networkd[1505]: cali8a36726a986: Link UP Jan 14 06:04:40.834770 systemd-networkd[1505]: cali8a36726a986: Gained carrier Jan 14 06:04:40.867133 containerd[1601]: 2026-01-14 06:04:40.663 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sgth2-eth0 csi-node-driver- calico-system 1fd1f2cb-320b-495b-b1a9-bd981c71562f 721 0 2026-01-14 06:04:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sgth2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8a36726a986 [] [] }} ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-" Jan 14 06:04:40.867133 containerd[1601]: 2026-01-14 06:04:40.663 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.867133 containerd[1601]: 2026-01-14 06:04:40.733 [INFO][4297] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" HandleID="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Workload="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.734 [INFO][4297] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" HandleID="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Workload="localhost-k8s-csi--node--driver--sgth2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sgth2", "timestamp":"2026-01-14 06:04:40.733458667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.734 [INFO][4297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.734 [INFO][4297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
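The audit records earlier in this log carry each process's command line hex-encoded in the `proctitle` field, with NUL bytes separating the argv entries. A small decoding aid (not part of the logged tooling, just a sketch for reading these records):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE payload: hex-encoded argv, NUL-separated."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8", "replace")

# The first bpftool audit record above decodes to its full command line:
title = ("627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F"
         "6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864"
         "702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470")
print(decode_proctitle(title))
# bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp
```

The same decode applied to the `iptables-restor` records yields `iptables-restore -w 5 -W 100000 --noflush --counters`.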
Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.734 [INFO][4297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.748 [INFO][4297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" host="localhost" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.760 [INFO][4297] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.770 [INFO][4297] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.774 [INFO][4297] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.781 [INFO][4297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:40.867859 containerd[1601]: 2026-01-14 06:04:40.781 [INFO][4297] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" host="localhost" Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.785 [INFO][4297] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.795 [INFO][4297] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" host="localhost" Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.820 [INFO][4297] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" host="localhost" Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.821 [INFO][4297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" host="localhost" Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.821 [INFO][4297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:40.868188 containerd[1601]: 2026-01-14 06:04:40.822 [INFO][4297] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" HandleID="k8s-pod-network.ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Workload="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.868736 containerd[1601]: 2026-01-14 06:04:40.827 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgth2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fd1f2cb-320b-495b-b1a9-bd981c71562f", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sgth2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a36726a986", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:40.868844 containerd[1601]: 2026-01-14 06:04:40.827 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.868844 containerd[1601]: 2026-01-14 06:04:40.827 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a36726a986 ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.868844 containerd[1601]: 2026-01-14 06:04:40.834 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.868919 containerd[1601]: 2026-01-14 06:04:40.838 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" 
Namespace="calico-system" Pod="csi-node-driver-sgth2" WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sgth2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fd1f2cb-320b-495b-b1a9-bd981c71562f", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb", Pod:"csi-node-driver-sgth2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8a36726a986", MAC:"62:c5:b6:83:f8:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:40.869014 containerd[1601]: 2026-01-14 06:04:40.863 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" Namespace="calico-system" Pod="csi-node-driver-sgth2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--sgth2-eth0" Jan 14 06:04:40.889000 audit[4326]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4326 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:40.889000 audit[4326]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffcade92cf0 a2=0 a3=7ffcade92cdc items=0 ppid=4080 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:40.889000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:40.926167 containerd[1601]: time="2026-01-14T06:04:40.926063741Z" level=info msg="connecting to shim ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb" address="unix:///run/containerd/s/fea13f81b9746c6839dd8dfa9dab44aa8d1dd1d1e53da8ce39cfe57e3e1e6f36" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:40.965522 systemd-networkd[1505]: calie4dedd90dad: Link UP Jan 14 06:04:40.967712 systemd-networkd[1505]: calie4dedd90dad: Gained carrier Jan 14 06:04:41.005545 containerd[1601]: 2026-01-14 06:04:40.645 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--gbcxx-eth0 goldmane-666569f655- calico-system 0e777262-7a52-479a-bfac-2fd2fb722412 836 0 2026-01-14 06:04:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-gbcxx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie4dedd90dad [] [] }} 
ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-" Jan 14 06:04:41.005545 containerd[1601]: 2026-01-14 06:04:40.645 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.005545 containerd[1601]: 2026-01-14 06:04:40.735 [INFO][4291] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" HandleID="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Workload="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.736 [INFO][4291] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" HandleID="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Workload="localhost-k8s-goldmane--666569f655--gbcxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000460e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-gbcxx", "timestamp":"2026-01-14 06:04:40.735905931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.737 [INFO][4291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
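The ipam lines in these records show Calico serializing assignment behind a host-wide lock, confirming this host's affinity for the block 192.168.88.128/26, then claiming the next free address from it (.130 for csi-node-driver above, .131 for goldmane below). A minimal, hypothetical sketch of that claim-from-block step using Python's `ipaddress` module (Calico's actual implementation is in ipam/ipam.go and additionally manages handles, affinities, and datastore writes):

```python
import ipaddress

def claim_next_free(block: str, used: set) -> str:
    """Claim the lowest free host address in a CIDR block; None if exhausted."""
    for addr in ipaddress.ip_network(block).hosts():
        if str(addr) not in used:
            used.add(str(addr))  # record the claim so repeat calls advance
            return str(addr)
    return None

# Mirroring the log: .129 (an earlier pod) and .130 (csi-node-driver) are
# taken, so the next claim, for goldmane, yields .131.
used = {"192.168.88.129", "192.168.88.130"}
print(claim_next_free("192.168.88.128/26", used))
# 192.168.88.131
```

Note that `hosts()` skips the network (.128) and broadcast (.191) addresses of the /26, which is why assignments in the log start at .129.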
Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.822 [INFO][4291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.822 [INFO][4291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.852 [INFO][4291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" host="localhost" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.871 [INFO][4291] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.887 [INFO][4291] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.891 [INFO][4291] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.899 [INFO][4291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.006025 containerd[1601]: 2026-01-14 06:04:40.901 [INFO][4291] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" host="localhost" Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.906 [INFO][4291] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346 Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.926 [INFO][4291] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" host="localhost" Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4291] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" host="localhost" Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" host="localhost" Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:41.006535 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4291] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" HandleID="k8s-pod-network.09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Workload="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.007146 containerd[1601]: 2026-01-14 06:04:40.955 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--gbcxx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e777262-7a52-479a-bfac-2fd2fb722412", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-gbcxx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4dedd90dad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.007146 containerd[1601]: 2026-01-14 06:04:40.955 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.007332 containerd[1601]: 2026-01-14 06:04:40.956 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4dedd90dad ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.007332 containerd[1601]: 2026-01-14 06:04:40.971 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.007462 containerd[1601]: 2026-01-14 06:04:40.973 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--gbcxx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e777262-7a52-479a-bfac-2fd2fb722412", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346", Pod:"goldmane-666569f655-gbcxx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4dedd90dad", MAC:"8e:e1:a4:1d:7f:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.007833 containerd[1601]: 2026-01-14 06:04:41.001 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" Namespace="calico-system" Pod="goldmane-666569f655-gbcxx" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gbcxx-eth0" Jan 14 06:04:41.036175 systemd[1]: Started cri-containerd-ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb.scope - libcontainer container ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb. Jan 14 06:04:41.044000 audit[4367]: NETFILTER_CFG table=filter:126 family=2 entries=48 op=nft_register_chain pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:41.051268 kernel: kauditd_printk_skb: 234 callbacks suppressed Jan 14 06:04:41.051446 kernel: audit: type=1325 audit(1768370681.044:656): table=filter:126 family=2 entries=48 op=nft_register_chain pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:41.044000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffc4a146710 a2=0 a3=7ffc4a1466fc items=0 ppid=4080 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.075822 kernel: audit: type=1300 audit(1768370681.044:656): arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffc4a146710 a2=0 a3=7ffc4a1466fc items=0 ppid=4080 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.075970 kernel: audit: type=1327 audit(1768370681.044:656): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:41.044000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:41.072000 audit: BPF prog-id=216 op=LOAD Jan 14 06:04:41.089975 
systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:41.091870 containerd[1601]: time="2026-01-14T06:04:41.091824701Z" level=info msg="connecting to shim 09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346" address="unix:///run/containerd/s/4ba32eb3293d98784189aaf8cb7d2953510d7f773377ae20679f001355a2f05a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:41.095554 kernel: audit: type=1334 audit(1768370681.072:657): prog-id=216 op=LOAD Jan 14 06:04:41.095738 kernel: audit: type=1334 audit(1768370681.076:658): prog-id=217 op=LOAD Jan 14 06:04:41.076000 audit: BPF prog-id=217 op=LOAD Jan 14 06:04:41.076000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.119285 kernel: audit: type=1300 audit(1768370681.076:658): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.119653 kernel: audit: type=1327 audit(1768370681.076:658): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.076000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.076000 audit: BPF prog-id=217 op=UNLOAD Jan 14 06:04:41.126675 kernel: audit: type=1334 audit(1768370681.076:659): prog-id=217 op=UNLOAD Jan 14 06:04:41.076000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.151650 kernel: audit: type=1300 audit(1768370681.076:659): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.151746 kernel: audit: type=1327 audit(1768370681.076:659): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.077000 audit: BPF prog-id=218 op=LOAD Jan 14 06:04:41.077000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.077000 audit: BPF prog-id=219 op=LOAD Jan 14 06:04:41.077000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.079000 audit: BPF prog-id=219 op=UNLOAD Jan 14 06:04:41.079000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.079000 audit: BPF prog-id=218 op=UNLOAD Jan 14 06:04:41.079000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.079000 audit: BPF prog-id=220 op=LOAD Jan 14 06:04:41.079000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4336 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162343663643365623561356561386562343430313636643165383162 Jan 14 06:04:41.153380 systemd-networkd[1505]: calid8dad12ea23: Link UP Jan 14 06:04:41.154785 systemd-networkd[1505]: calid8dad12ea23: Gained carrier Jan 14 06:04:41.156543 systemd[1]: Started cri-containerd-09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346.scope - libcontainer container 09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346. 
Jan 14 06:04:41.174503 containerd[1601]: time="2026-01-14T06:04:41.174347664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgth2,Uid:1fd1f2cb-320b-495b-b1a9-bd981c71562f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab46cd3eb5a5ea8eb440166d1e81b6f9f960797f0a4abb805a036043ace6f9eb\"" Jan 14 06:04:41.181512 containerd[1601]: time="2026-01-14T06:04:41.181413904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:04:41.185811 containerd[1601]: 2026-01-14 06:04:40.687 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0 calico-apiserver-5bfb4c95c8- calico-apiserver 3724f055-c35a-48ef-a153-ecc79aaf3801 835 0 2026-01-14 06:04:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bfb4c95c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bfb4c95c8-lft2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8dad12ea23 [] [] }} ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-" Jan 14 06:04:41.185811 containerd[1601]: 2026-01-14 06:04:40.688 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.185811 containerd[1601]: 2026-01-14 06:04:40.744 [INFO][4307] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" HandleID="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.744 [INFO][4307] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" HandleID="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042a090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bfb4c95c8-lft2v", "timestamp":"2026-01-14 06:04:40.744028644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.744 [INFO][4307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.945 [INFO][4307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.963 [INFO][4307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" host="localhost" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:40.982 [INFO][4307] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:41.005 [INFO][4307] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:41.020 [INFO][4307] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:41.039 [INFO][4307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.186328 containerd[1601]: 2026-01-14 06:04:41.040 [INFO][4307] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" host="localhost" Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.059 [INFO][4307] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344 Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.085 [INFO][4307] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" host="localhost" Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.120 [INFO][4307] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" host="localhost" Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.120 [INFO][4307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" host="localhost" Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.120 [INFO][4307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:41.187225 containerd[1601]: 2026-01-14 06:04:41.120 [INFO][4307] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" HandleID="k8s-pod-network.38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.187339 containerd[1601]: 2026-01-14 06:04:41.132 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0", GenerateName:"calico-apiserver-5bfb4c95c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3724f055-c35a-48ef-a153-ecc79aaf3801", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb4c95c8", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bfb4c95c8-lft2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8dad12ea23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.187471 containerd[1601]: 2026-01-14 06:04:41.132 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.187471 containerd[1601]: 2026-01-14 06:04:41.132 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8dad12ea23 ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.187471 containerd[1601]: 2026-01-14 06:04:41.156 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.187540 containerd[1601]: 2026-01-14 06:04:41.158 [INFO][4273] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0", GenerateName:"calico-apiserver-5bfb4c95c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3724f055-c35a-48ef-a153-ecc79aaf3801", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb4c95c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344", Pod:"calico-apiserver-5bfb4c95c8-lft2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8dad12ea23", MAC:"4e:a9:c3:71:bd:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.187867 containerd[1601]: 2026-01-14 06:04:41.177 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-lft2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--lft2v-eth0" Jan 14 06:04:41.211000 audit: BPF prog-id=221 op=LOAD Jan 14 06:04:41.214000 audit: BPF prog-id=222 op=LOAD Jan 14 06:04:41.214000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.215000 audit: BPF prog-id=222 op=UNLOAD Jan 14 06:04:41.215000 audit[4398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.215000 audit: BPF prog-id=223 op=LOAD Jan 14 06:04:41.215000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.215000 audit: BPF prog-id=224 op=LOAD Jan 14 06:04:41.215000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.216000 audit: BPF prog-id=224 op=UNLOAD Jan 14 06:04:41.216000 audit[4398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.217000 audit: BPF prog-id=223 op=UNLOAD Jan 14 06:04:41.217000 audit[4398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.218000 audit: BPF prog-id=225 op=LOAD Jan 14 06:04:41.218000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4384 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626333633832613737636465623836646637313165373736333538 Jan 14 06:04:41.223757 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:41.224000 audit[4432]: NETFILTER_CFG table=filter:127 family=2 entries=58 op=nft_register_chain pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:41.224000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffe402859f0 a2=0 a3=7ffe402859dc items=0 ppid=4080 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.224000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:41.240699 containerd[1601]: 
time="2026-01-14T06:04:41.239207384Z" level=info msg="connecting to shim 38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344" address="unix:///run/containerd/s/f93f89a517393f14d075573ad739e8d4e77fb2542d3efca4ae756e35633d22ed" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:41.278666 containerd[1601]: time="2026-01-14T06:04:41.275358320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:41.278666 containerd[1601]: time="2026-01-14T06:04:41.276842718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:04:41.278666 containerd[1601]: time="2026-01-14T06:04:41.276915726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:41.278836 kubelet[2756]: E0114 06:04:41.277024 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:04:41.278836 kubelet[2756]: E0114 06:04:41.277058 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:04:41.278836 kubelet[2756]: E0114 06:04:41.277159 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 06:04:41.282240 containerd[1601]: time="2026-01-14T06:04:41.281842841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:04:41.281992 systemd[1]: Started cri-containerd-38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344.scope - libcontainer container 38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344. Jan 14 06:04:41.299632 containerd[1601]: time="2026-01-14T06:04:41.299512464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gbcxx,Uid:0e777262-7a52-479a-bfac-2fd2fb722412,Namespace:calico-system,Attempt:0,} returns sandbox id \"09bc3c82a77cdeb86df711e776358230dbd3e3c7fa2c870af194ad86c1b19346\"" Jan 14 06:04:41.318000 audit: BPF prog-id=226 op=LOAD Jan 14 06:04:41.319000 audit: BPF prog-id=227 op=LOAD Jan 14 06:04:41.319000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.319000 audit: BPF prog-id=227 op=UNLOAD Jan 14 06:04:41.319000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.319000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.320000 audit: BPF prog-id=228 op=LOAD Jan 14 06:04:41.320000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.320000 audit: BPF prog-id=229 op=LOAD Jan 14 06:04:41.320000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.320000 audit: BPF prog-id=229 op=UNLOAD Jan 14 06:04:41.320000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:04:41.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.320000 audit: BPF prog-id=228 op=UNLOAD Jan 14 06:04:41.320000 audit[4452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.320000 audit: BPF prog-id=230 op=LOAD Jan 14 06:04:41.320000 audit[4452]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4441 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338633465316332393763306266646462383632643835366634613335 Jan 14 06:04:41.324293 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:41.356260 containerd[1601]: time="2026-01-14T06:04:41.355990483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:41.357981 
containerd[1601]: time="2026-01-14T06:04:41.357953371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:04:41.358109 containerd[1601]: time="2026-01-14T06:04:41.358092042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:41.358837 kubelet[2756]: E0114 06:04:41.358757 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:04:41.358993 kubelet[2756]: E0114 06:04:41.358964 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:04:41.359391 kubelet[2756]: E0114 06:04:41.359336 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:41.361824 kubelet[2756]: E0114 06:04:41.361360 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:41.362104 containerd[1601]: time="2026-01-14T06:04:41.362080550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:04:41.412524 containerd[1601]: time="2026-01-14T06:04:41.412216317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-lft2v,Uid:3724f055-c35a-48ef-a153-ecc79aaf3801,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38c4e1c297c0bfddb862d856f4a35fdd7c25f726d6c139947edb8c43a563e344\"" Jan 14 06:04:41.431907 containerd[1601]: time="2026-01-14T06:04:41.431822053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:41.433940 containerd[1601]: time="2026-01-14T06:04:41.433823319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:04:41.435083 containerd[1601]: time="2026-01-14T06:04:41.433886952Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:41.435666 kubelet[2756]: E0114 06:04:41.434256 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:04:41.435666 kubelet[2756]: E0114 06:04:41.434302 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:04:41.435666 kubelet[2756]: E0114 06:04:41.434723 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wblzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:41.435947 containerd[1601]: time="2026-01-14T06:04:41.435710584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 
06:04:41.437150 kubelet[2756]: E0114 06:04:41.437086 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:04:41.498369 containerd[1601]: time="2026-01-14T06:04:41.498235161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:41.499959 containerd[1601]: time="2026-01-14T06:04:41.499718207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:04:41.499959 containerd[1601]: time="2026-01-14T06:04:41.499812919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:41.500132 kubelet[2756]: E0114 06:04:41.499984 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:41.500132 kubelet[2756]: E0114 06:04:41.500027 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:41.500252 kubelet[2756]: E0114 06:04:41.500131 2756 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r649q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:41.501893 kubelet[2756]: E0114 06:04:41.501824 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:04:41.559201 containerd[1601]: time="2026-01-14T06:04:41.558893030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-29zzb,Uid:c28818ff-c451-40e2-8223-e6f03d8b8188,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:04:41.559201 containerd[1601]: 
time="2026-01-14T06:04:41.558895747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-997b9f787-7wfms,Uid:c77379be-a206-411d-9fc6-5a9725c3295c,Namespace:calico-system,Attempt:0,}" Jan 14 06:04:41.813136 kubelet[2756]: E0114 06:04:41.812969 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:04:41.823781 kubelet[2756]: E0114 06:04:41.823689 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:04:41.829429 systemd-networkd[1505]: cali097584071a2: Link UP Jan 14 06:04:41.834143 kubelet[2756]: E0114 06:04:41.833981 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:41.835418 systemd-networkd[1505]: cali097584071a2: Gained carrier Jan 14 06:04:41.868916 containerd[1601]: 2026-01-14 06:04:41.656 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0 calico-kube-controllers-997b9f787- calico-system c77379be-a206-411d-9fc6-5a9725c3295c 831 0 2026-01-14 06:04:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:997b9f787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-997b9f787-7wfms eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali097584071a2 [] [] }} ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-" Jan 14 06:04:41.868916 containerd[1601]: 2026-01-14 06:04:41.657 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.868916 containerd[1601]: 2026-01-14 06:04:41.733 [INFO][4513] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" HandleID="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Workload="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.734 [INFO][4513] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" HandleID="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Workload="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-997b9f787-7wfms", "timestamp":"2026-01-14 06:04:41.733980495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.734 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.735 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.735 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.749 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" host="localhost" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.762 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.774 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.779 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.785 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:41.869269 containerd[1601]: 2026-01-14 06:04:41.786 [INFO][4513] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" host="localhost" Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.790 [INFO][4513] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147 Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.799 [INFO][4513] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" host="localhost" Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4513] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" host="localhost" Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" host="localhost" Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:41.869676 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4513] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" HandleID="k8s-pod-network.63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Workload="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.869808 containerd[1601]: 2026-01-14 06:04:41.820 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0", GenerateName:"calico-kube-controllers-997b9f787-", Namespace:"calico-system", SelfLink:"", UID:"c77379be-a206-411d-9fc6-5a9725c3295c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"997b9f787", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-997b9f787-7wfms", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali097584071a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.869912 containerd[1601]: 2026-01-14 06:04:41.820 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.869912 containerd[1601]: 2026-01-14 06:04:41.820 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali097584071a2 ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.869912 containerd[1601]: 2026-01-14 06:04:41.836 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.869987 containerd[1601]: 2026-01-14 
06:04:41.837 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0", GenerateName:"calico-kube-controllers-997b9f787-", Namespace:"calico-system", SelfLink:"", UID:"c77379be-a206-411d-9fc6-5a9725c3295c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"997b9f787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147", Pod:"calico-kube-controllers-997b9f787-7wfms", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali097584071a2", MAC:"ca:95:57:11:5e:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:41.870129 containerd[1601]: 2026-01-14 
06:04:41.861 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" Namespace="calico-system" Pod="calico-kube-controllers-997b9f787-7wfms" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--997b9f787--7wfms-eth0" Jan 14 06:04:41.882000 audit[4540]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:41.882000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc47feb3b0 a2=0 a3=7ffc47feb39c items=0 ppid=2918 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:41.889000 audit[4540]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:41.889000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc47feb3b0 a2=0 a3=0 items=0 ppid=2918 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:41.912000 audit[4545]: NETFILTER_CFG table=filter:130 family=2 entries=48 op=nft_register_chain pid=4545 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:41.912000 audit[4545]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffea9efe140 a2=0 a3=7ffea9efe12c 
items=0 ppid=4080 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.912000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:41.918362 containerd[1601]: time="2026-01-14T06:04:41.918235883Z" level=info msg="connecting to shim 63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147" address="unix:///run/containerd/s/a78204a2fd60a0b33038437a2ee8b29e0bc6ace726c4ee13917194b9e24521a6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:41.956091 systemd[1]: Started cri-containerd-63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147.scope - libcontainer container 63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147. Jan 14 06:04:41.977000 audit[4582]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:41.977000 audit[4582]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff13a96730 a2=0 a3=7fff13a9671c items=0 ppid=2918 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:41.982000 audit: BPF prog-id=231 op=LOAD Jan 14 06:04:41.982000 audit: BPF prog-id=232 op=LOAD Jan 14 06:04:41.982000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.983000 audit: BPF prog-id=232 op=UNLOAD Jan 14 06:04:41.983000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.984000 audit: BPF prog-id=233 op=LOAD Jan 14 06:04:41.984000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.984000 audit: BPF prog-id=234 op=LOAD Jan 14 06:04:41.984000 audit[4582]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 
06:04:41.984000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.984000 audit: BPF prog-id=234 op=UNLOAD Jan 14 06:04:41.984000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.985000 audit: BPF prog-id=233 op=UNLOAD Jan 14 06:04:41.985000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.985000 
audit: BPF prog-id=235 op=LOAD Jan 14 06:04:41.985000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4553 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633636263396633316635326430366237303638336365623665373530 Jan 14 06:04:41.984000 audit[4582]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff13a96730 a2=0 a3=0 items=0 ppid=2918 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:41.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:41.987965 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:42.011941 systemd-networkd[1505]: caliaa13d12462d: Link UP Jan 14 06:04:42.012304 systemd-networkd[1505]: caliaa13d12462d: Gained carrier Jan 14 06:04:42.039314 containerd[1601]: 2026-01-14 06:04:41.655 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0 calico-apiserver-5bfb4c95c8- calico-apiserver c28818ff-c451-40e2-8223-e6f03d8b8188 838 0 2026-01-14 06:04:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bfb4c95c8 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bfb4c95c8-29zzb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa13d12462d [] [] }} ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-" Jan 14 06:04:42.039314 containerd[1601]: 2026-01-14 06:04:41.656 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.039314 containerd[1601]: 2026-01-14 06:04:41.735 [INFO][4515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" HandleID="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.735 [INFO][4515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" HandleID="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bfb4c95c8-29zzb", "timestamp":"2026-01-14 06:04:41.735169095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.735 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.816 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.854 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" host="localhost" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.914 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.940 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.950 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.965 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:42.040004 containerd[1601]: 2026-01-14 06:04:41.965 [INFO][4515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" host="localhost" Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:41.971 [INFO][4515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7 Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:41.982 [INFO][4515] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" host="localhost" Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:42.001 [INFO][4515] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" host="localhost" Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:42.001 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" host="localhost" Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:42.002 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:42.040269 containerd[1601]: 2026-01-14 06:04:42.002 [INFO][4515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" HandleID="k8s-pod-network.5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Workload="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.040381 containerd[1601]: 2026-01-14 06:04:42.006 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0", GenerateName:"calico-apiserver-5bfb4c95c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28818ff-c451-40e2-8223-e6f03d8b8188", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 15, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb4c95c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bfb4c95c8-29zzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa13d12462d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:42.040471 containerd[1601]: 2026-01-14 06:04:42.006 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.040471 containerd[1601]: 2026-01-14 06:04:42.006 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa13d12462d ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.040471 containerd[1601]: 2026-01-14 06:04:42.011 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.040542 containerd[1601]: 2026-01-14 06:04:42.012 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0", GenerateName:"calico-apiserver-5bfb4c95c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28818ff-c451-40e2-8223-e6f03d8b8188", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb4c95c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7", Pod:"calico-apiserver-5bfb4c95c8-29zzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa13d12462d", MAC:"ba:8f:81:6f:fe:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:42.040693 containerd[1601]: 2026-01-14 06:04:42.035 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb4c95c8-29zzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb4c95c8--29zzb-eth0" Jan 14 06:04:42.049165 containerd[1601]: time="2026-01-14T06:04:42.049121230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-997b9f787-7wfms,Uid:c77379be-a206-411d-9fc6-5a9725c3295c,Namespace:calico-system,Attempt:0,} returns sandbox id \"63cbc9f31f52d06b70683ceb6e750f51cd5d2a1191f72a604e22741c62596147\"" Jan 14 06:04:42.052282 containerd[1601]: time="2026-01-14T06:04:42.051974303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:04:42.061000 audit[4598]: NETFILTER_CFG table=filter:133 family=2 entries=53 op=nft_register_chain pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:42.061000 audit[4598]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffc64c53bd0 a2=0 a3=7ffc64c53bbc items=0 ppid=4080 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.061000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:42.088803 containerd[1601]: time="2026-01-14T06:04:42.087987145Z" level=info msg="connecting to shim 5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7" 
address="unix:///run/containerd/s/0eac8f25f25b575005fef4ff002cc875d86f52ccb3f8eef4b7d986daee71ecdc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:42.116434 containerd[1601]: time="2026-01-14T06:04:42.116396034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:42.121025 containerd[1601]: time="2026-01-14T06:04:42.120938361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:04:42.121127 containerd[1601]: time="2026-01-14T06:04:42.121071531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:42.122732 kubelet[2756]: E0114 06:04:42.122453 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:04:42.122732 kubelet[2756]: E0114 06:04:42.122696 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:04:42.122909 kubelet[2756]: E0114 06:04:42.122836 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg59x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:42.124951 kubelet[2756]: E0114 06:04:42.124272 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:04:42.143074 systemd[1]: Started cri-containerd-5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7.scope - libcontainer container 5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7. 
Jan 14 06:04:42.160000 audit: BPF prog-id=236 op=LOAD Jan 14 06:04:42.161000 audit: BPF prog-id=237 op=LOAD Jan 14 06:04:42.161000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.161000 audit: BPF prog-id=237 op=UNLOAD Jan 14 06:04:42.161000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.162000 audit: BPF prog-id=238 op=LOAD Jan 14 06:04:42.162000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.162000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.162000 audit: BPF prog-id=239 op=LOAD Jan 14 06:04:42.162000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.162000 audit: BPF prog-id=239 op=UNLOAD Jan 14 06:04:42.162000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.162000 audit: BPF prog-id=238 op=UNLOAD Jan 14 06:04:42.162000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:04:42.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.162000 audit: BPF prog-id=240 op=LOAD Jan 14 06:04:42.162000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4607 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530303965653734396431303736633662333338343530313633336139 Jan 14 06:04:42.165306 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:42.225630 containerd[1601]: time="2026-01-14T06:04:42.225361763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb4c95c8-29zzb,Uid:c28818ff-c451-40e2-8223-e6f03d8b8188,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5009ee749d1076c6b3384501633a906be037dbee84c350fea07888eda1e353c7\"" Jan 14 06:04:42.227423 containerd[1601]: time="2026-01-14T06:04:42.227256665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:04:42.287716 containerd[1601]: time="2026-01-14T06:04:42.287518163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:42.289179 containerd[1601]: time="2026-01-14T06:04:42.289132340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:04:42.289179 containerd[1601]: time="2026-01-14T06:04:42.289218163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:42.289398 kubelet[2756]: E0114 06:04:42.289361 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:42.290030 kubelet[2756]: E0114 06:04:42.289965 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:42.290221 kubelet[2756]: E0114 06:04:42.290141 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj2rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:42.291802 kubelet[2756]: E0114 06:04:42.291729 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:04:42.348936 systemd-networkd[1505]: calie4dedd90dad: Gained IPv6LL Jan 14 06:04:42.540998 systemd-networkd[1505]: cali8a36726a986: Gained IPv6LL Jan 14 06:04:42.558939 kubelet[2756]: E0114 06:04:42.558848 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:42.559365 containerd[1601]: time="2026-01-14T06:04:42.559335733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6h4z,Uid:8c744601-7d6b-423f-ba88-6bef4a7dd5ae,Namespace:kube-system,Attempt:0,}" Jan 14 06:04:42.747969 systemd-networkd[1505]: caliaed7882d629: Link UP Jan 14 06:04:42.748741 systemd-networkd[1505]: caliaed7882d629: Gained carrier Jan 14 06:04:42.778912 containerd[1601]: 2026-01-14 06:04:42.621 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0 coredns-668d6bf9bc- kube-system 8c744601-7d6b-423f-ba88-6bef4a7dd5ae 826 0 2026-01-14 06:04:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-668d6bf9bc-m6h4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaed7882d629 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-" Jan 14 06:04:42.778912 containerd[1601]: 2026-01-14 06:04:42.621 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.778912 containerd[1601]: 2026-01-14 06:04:42.663 [INFO][4659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" HandleID="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Workload="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.663 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" HandleID="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Workload="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m6h4z", "timestamp":"2026-01-14 06:04:42.6632332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.663 [INFO][4659] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.663 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.663 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.673 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" host="localhost" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.686 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.697 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.700 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.705 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:42.781021 containerd[1601]: 2026-01-14 06:04:42.706 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" host="localhost" Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 06:04:42.710 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 06:04:42.723 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" host="localhost" Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 
06:04:42.738 [INFO][4659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" host="localhost" Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 06:04:42.738 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" host="localhost" Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 06:04:42.738 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:04:42.781432 containerd[1601]: 2026-01-14 06:04:42.738 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" HandleID="k8s-pod-network.8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Workload="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.781777 containerd[1601]: 2026-01-14 06:04:42.742 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c744601-7d6b-423f-ba88-6bef4a7dd5ae", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m6h4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaed7882d629", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:42.781922 containerd[1601]: 2026-01-14 06:04:42.742 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.781922 containerd[1601]: 2026-01-14 06:04:42.742 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaed7882d629 ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.781922 containerd[1601]: 2026-01-14 06:04:42.749 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.782038 containerd[1601]: 2026-01-14 06:04:42.750 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c744601-7d6b-423f-ba88-6bef4a7dd5ae", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd", Pod:"coredns-668d6bf9bc-m6h4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaed7882d629", MAC:"92:68:46:2b:b9:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:42.782038 containerd[1601]: 2026-01-14 06:04:42.774 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-m6h4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m6h4z-eth0" Jan 14 06:04:42.800000 audit[4679]: NETFILTER_CFG table=filter:134 family=2 entries=68 op=nft_register_chain pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:42.800000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=31344 a0=3 a1=7ffca9210680 a2=0 a3=7ffca921066c items=0 ppid=4080 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.800000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:42.810788 containerd[1601]: time="2026-01-14T06:04:42.810624530Z" level=info msg="connecting to shim 8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd" address="unix:///run/containerd/s/2dcc7cb87729eeb2664009808f7e378a96cb8fd85432e75d5c8ec34649b7b823" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:42.841370 kubelet[2756]: E0114 06:04:42.841293 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:04:42.842918 systemd[1]: Started cri-containerd-8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd.scope - libcontainer container 8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd. Jan 14 06:04:42.844206 kubelet[2756]: E0114 06:04:42.844134 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:04:42.845357 kubelet[2756]: E0114 06:04:42.845208 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:04:42.847025 kubelet[2756]: E0114 06:04:42.846682 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:04:42.847300 kubelet[2756]: E0114 06:04:42.847273 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:42.863000 audit: BPF prog-id=241 op=LOAD Jan 14 06:04:42.863000 audit: BPF prog-id=242 op=LOAD Jan 14 06:04:42.863000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.863000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.863000 audit: BPF prog-id=242 op=UNLOAD Jan 14 06:04:42.863000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.863000 audit: BPF prog-id=243 op=LOAD Jan 14 06:04:42.863000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.864000 audit: BPF prog-id=244 op=LOAD Jan 14 06:04:42.864000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:04:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.864000 audit: BPF prog-id=244 op=UNLOAD Jan 14 06:04:42.864000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.864000 audit: BPF prog-id=243 op=UNLOAD Jan 14 06:04:42.864000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.864000 audit: BPF prog-id=245 op=LOAD Jan 14 06:04:42.864000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864663636653230383265336438616639663538376334303337313933 Jan 14 06:04:42.865856 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:42.922904 containerd[1601]: time="2026-01-14T06:04:42.922697204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m6h4z,Uid:8c744601-7d6b-423f-ba88-6bef4a7dd5ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd\"" Jan 14 06:04:42.926857 kubelet[2756]: E0114 06:04:42.926796 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:42.948928 containerd[1601]: time="2026-01-14T06:04:42.948781673Z" level=info msg="CreateContainer within sandbox \"8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:04:42.970157 containerd[1601]: time="2026-01-14T06:04:42.970080010Z" level=info msg="Container 1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:42.975393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692326115.mount: Deactivated successfully. 
Jan 14 06:04:42.981696 containerd[1601]: time="2026-01-14T06:04:42.980164461Z" level=info msg="CreateContainer within sandbox \"8df66e2082e3d8af9f587c4037193c8a0370451b0b4680ba28eb34094da616dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b\"" Jan 14 06:04:42.981696 containerd[1601]: time="2026-01-14T06:04:42.981109874Z" level=info msg="StartContainer for \"1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b\"" Jan 14 06:04:42.983933 containerd[1601]: time="2026-01-14T06:04:42.983432987Z" level=info msg="connecting to shim 1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b" address="unix:///run/containerd/s/2dcc7cb87729eeb2664009808f7e378a96cb8fd85432e75d5c8ec34649b7b823" protocol=ttrpc version=3 Jan 14 06:04:42.994000 audit[4730]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:42.994000 audit[4730]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdaac3c110 a2=0 a3=7ffdaac3c0fc items=0 ppid=2918 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:42.994000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:43.001000 audit[4730]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:43.001000 audit[4730]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdaac3c110 a2=0 a3=0 items=0 ppid=2918 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:43.042972 systemd[1]: Started cri-containerd-1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b.scope - libcontainer container 1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b. Jan 14 06:04:43.053846 systemd-networkd[1505]: cali097584071a2: Gained IPv6LL Jan 14 06:04:43.075000 audit: BPF prog-id=246 op=LOAD Jan 14 06:04:43.078000 audit: BPF prog-id=247 op=LOAD Jan 14 06:04:43.078000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.078000 audit: BPF prog-id=247 op=UNLOAD Jan 14 06:04:43.078000 audit[4731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.078000 audit: BPF prog-id=248 op=LOAD Jan 14 06:04:43.078000 
audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.078000 audit: BPF prog-id=249 op=LOAD Jan 14 06:04:43.078000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.078000 audit: BPF prog-id=249 op=UNLOAD Jan 14 06:04:43.078000 audit[4731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.079000 audit: BPF 
prog-id=248 op=UNLOAD Jan 14 06:04:43.079000 audit[4731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.079000 audit: BPF prog-id=250 op=LOAD Jan 14 06:04:43.079000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4689 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163623338363763366438393438666637356436636264393231326534 Jan 14 06:04:43.122138 containerd[1601]: time="2026-01-14T06:04:43.122103114Z" level=info msg="StartContainer for \"1cb3867c6d8948ff75d6cbd9212e48023aa2aab8d23496c57198187971075e9b\" returns successfully" Jan 14 06:04:43.181037 systemd-networkd[1505]: calid8dad12ea23: Gained IPv6LL Jan 14 06:04:43.558247 kubelet[2756]: E0114 06:04:43.558142 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:43.558968 containerd[1601]: time="2026-01-14T06:04:43.558870800Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9tzhc,Uid:d4551c19-eae2-49ff-b34e-8730e920f6f5,Namespace:kube-system,Attempt:0,}" Jan 14 06:04:43.764200 systemd-networkd[1505]: calif849b247a84: Link UP Jan 14 06:04:43.768470 systemd-networkd[1505]: calif849b247a84: Gained carrier Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.637 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0 coredns-668d6bf9bc- kube-system d4551c19-eae2-49ff-b34e-8730e920f6f5 837 0 2026-01-14 06:04:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9tzhc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif849b247a84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.637 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.683 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" HandleID="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Workload="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.683 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" HandleID="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Workload="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9tzhc", "timestamp":"2026-01-14 06:04:43.683014351 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.683 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.683 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.683 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.694 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.705 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.715 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.718 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.722 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.722 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.726 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708 Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.737 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.754 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.754 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" host="localhost" Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.754 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:04:43.799460 containerd[1601]: 2026-01-14 06:04:43.754 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" HandleID="k8s-pod-network.9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Workload="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.758 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4551c19-eae2-49ff-b34e-8730e920f6f5", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9tzhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif849b247a84", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.758 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.759 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif849b247a84 ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.769 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.770 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4551c19-eae2-49ff-b34e-8730e920f6f5", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 4, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708", Pod:"coredns-668d6bf9bc-9tzhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif849b247a84", MAC:"ea:78:ca:a8:00:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:04:43.801150 containerd[1601]: 2026-01-14 06:04:43.793 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tzhc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9tzhc-eth0" Jan 14 06:04:43.823000 audit[4799]: NETFILTER_CFG table=filter:137 family=2 entries=52 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:04:43.823000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffc00bf9aa0 a2=0 a3=7ffc00bf9a8c items=0 ppid=4080 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.823000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:04:43.852125 containerd[1601]: time="2026-01-14T06:04:43.851458182Z" level=info msg="connecting to shim 9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708" address="unix:///run/containerd/s/6c761fe64ce73985c1cd42c984f1668cbae30a017bbbdde15e4374b5c5e697d8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:04:43.856246 kubelet[2756]: E0114 06:04:43.856041 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:43.858210 kubelet[2756]: E0114 06:04:43.858171 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:04:43.859024 kubelet[2756]: E0114 06:04:43.858936 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:04:43.913973 systemd[1]: Started cri-containerd-9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708.scope - libcontainer container 9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708. Jan 14 06:04:43.940000 audit: BPF prog-id=251 op=LOAD Jan 14 06:04:43.941000 audit: BPF prog-id=252 op=LOAD Jan 14 06:04:43.941000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.941000 audit: BPF prog-id=252 op=UNLOAD Jan 14 06:04:43.941000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.941000 audit: BPF prog-id=253 op=LOAD Jan 14 06:04:43.941000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.941000 audit: BPF prog-id=254 op=LOAD Jan 14 06:04:43.941000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.941000 audit: BPF prog-id=254 op=UNLOAD Jan 14 06:04:43.941000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.942000 audit: BPF prog-id=253 op=UNLOAD Jan 14 06:04:43.942000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.942000 audit: BPF prog-id=255 op=LOAD Jan 14 06:04:43.942000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4807 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965666337663865616635643562333139643366653734333330323938 Jan 14 06:04:43.944758 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 06:04:43.950070 systemd-networkd[1505]: caliaa13d12462d: Gained IPv6LL Jan 14 
06:04:43.968194 kubelet[2756]: I0114 06:04:43.967998 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m6h4z" podStartSLOduration=42.967982174 podStartE2EDuration="42.967982174s" podCreationTimestamp="2026-01-14 06:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:04:43.965794412 +0000 UTC m=+46.586478842" watchObservedRunningTime="2026-01-14 06:04:43.967982174 +0000 UTC m=+46.588666605" Jan 14 06:04:43.995000 audit[4845]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:43.995000 audit[4845]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c67a550 a2=0 a3=7fff5c67a53c items=0 ppid=2918 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:43.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:44.000000 audit[4845]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:44.000000 audit[4845]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5c67a550 a2=0 a3=0 items=0 ppid=2918 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:44.024317 containerd[1601]: 
time="2026-01-14T06:04:44.024228927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tzhc,Uid:d4551c19-eae2-49ff-b34e-8730e920f6f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708\"" Jan 14 06:04:44.025844 kubelet[2756]: E0114 06:04:44.025778 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:44.028754 containerd[1601]: time="2026-01-14T06:04:44.028722873Z" level=info msg="CreateContainer within sandbox \"9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:04:44.045378 containerd[1601]: time="2026-01-14T06:04:44.045284817Z" level=info msg="Container ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:04:44.055002 containerd[1601]: time="2026-01-14T06:04:44.054802107Z" level=info msg="CreateContainer within sandbox \"9efc7f8eaf5d5b319d3fe74330298864dc5c676433a3509ab3f080d1e3cfa708\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce\"" Jan 14 06:04:44.056071 containerd[1601]: time="2026-01-14T06:04:44.055749577Z" level=info msg="StartContainer for \"ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce\"" Jan 14 06:04:44.057958 containerd[1601]: time="2026-01-14T06:04:44.057479470Z" level=info msg="connecting to shim ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce" address="unix:///run/containerd/s/6c761fe64ce73985c1cd42c984f1668cbae30a017bbbdde15e4374b5c5e697d8" protocol=ttrpc version=3 Jan 14 06:04:44.097862 systemd[1]: Started cri-containerd-ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce.scope - libcontainer container 
ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce. Jan 14 06:04:44.116000 audit: BPF prog-id=256 op=LOAD Jan 14 06:04:44.117000 audit: BPF prog-id=257 op=LOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=257 op=UNLOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=258 op=LOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=259 op=LOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=259 op=UNLOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=258 op=UNLOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.117000 audit: BPF prog-id=260 op=LOAD Jan 14 06:04:44.117000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4807 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373032323961323533306636356365613333386665383530623735 Jan 14 06:04:44.147143 containerd[1601]: time="2026-01-14T06:04:44.146964323Z" level=info msg="StartContainer for \"ed70229a2530f65cea338fe850b75a2d3c33c0f5a3583ca0f0070fb17af96fce\" returns successfully" Jan 14 06:04:44.333413 systemd-networkd[1505]: caliaed7882d629: Gained IPv6LL Jan 14 06:04:44.861530 kubelet[2756]: E0114 06:04:44.860476 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:44.861530 kubelet[2756]: E0114 06:04:44.861533 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:44.888890 kubelet[2756]: I0114 06:04:44.888661 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9tzhc" podStartSLOduration=43.888462114 podStartE2EDuration="43.888462114s" 
podCreationTimestamp="2026-01-14 06:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:04:44.885474099 +0000 UTC m=+47.506158540" watchObservedRunningTime="2026-01-14 06:04:44.888462114 +0000 UTC m=+47.509146545" Jan 14 06:04:44.909020 systemd-networkd[1505]: calif849b247a84: Gained IPv6LL Jan 14 06:04:44.916000 audit[4888]: NETFILTER_CFG table=filter:140 family=2 entries=20 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:44.916000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffff533a060 a2=0 a3=7ffff533a04c items=0 ppid=2918 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:44.934000 audit[4888]: NETFILTER_CFG table=nat:141 family=2 entries=14 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:44.934000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffff533a060 a2=0 a3=0 items=0 ppid=2918 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:44.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:45.862993 kubelet[2756]: E0114 06:04:45.862895 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 14 06:04:45.862993 kubelet[2756]: E0114 06:04:45.862895 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:45.965000 audit[4890]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:45.965000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe3aab2170 a2=0 a3=7ffe3aab215c items=0 ppid=2918 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:45.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:45.981000 audit[4890]: NETFILTER_CFG table=nat:143 family=2 entries=47 op=nft_register_chain pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:04:45.981000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe3aab2170 a2=0 a3=7ffe3aab215c items=0 ppid=2918 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:04:45.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:04:46.865582 kubelet[2756]: E0114 06:04:46.865495 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:46.865582 kubelet[2756]: E0114 06:04:46.865538 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:47.544935 kubelet[2756]: I0114 06:04:47.544838 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:04:47.545685 kubelet[2756]: E0114 06:04:47.545513 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:47.872150 kubelet[2756]: E0114 06:04:47.871065 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:04:54.577364 containerd[1601]: time="2026-01-14T06:04:54.577272745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:04:54.640940 containerd[1601]: time="2026-01-14T06:04:54.640732650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:54.642978 containerd[1601]: time="2026-01-14T06:04:54.642810089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:04:54.643349 containerd[1601]: time="2026-01-14T06:04:54.642956342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:54.644085 kubelet[2756]: E0114 06:04:54.643981 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:04:54.644085 kubelet[2756]: E0114 06:04:54.644044 2756 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:04:54.646745 kubelet[2756]: E0114 06:04:54.646674 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c30e3dd0f1294436b07d32775ce4f267,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:54.651155 containerd[1601]: time="2026-01-14T06:04:54.650909739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:04:54.733297 containerd[1601]: time="2026-01-14T06:04:54.733021755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:54.737107 containerd[1601]: time="2026-01-14T06:04:54.737064024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:54.737419 containerd[1601]: time="2026-01-14T06:04:54.737250637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:04:54.738118 kubelet[2756]: E0114 06:04:54.737992 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:04:54.738735 kubelet[2756]: E0114 06:04:54.738657 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:04:54.739245 kubelet[2756]: E0114 06:04:54.739079 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:54.742626 kubelet[2756]: E0114 06:04:54.741660 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:04:56.560351 containerd[1601]: time="2026-01-14T06:04:56.560246652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:04:56.620999 containerd[1601]: time="2026-01-14T06:04:56.620810225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:56.622365 containerd[1601]: time="2026-01-14T06:04:56.622204822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:04:56.622365 containerd[1601]: time="2026-01-14T06:04:56.622303486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:56.622640 kubelet[2756]: E0114 06:04:56.622506 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:04:56.622640 kubelet[2756]: E0114 06:04:56.622555 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:04:56.623112 kubelet[2756]: E0114 06:04:56.622746 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:56.625395 containerd[1601]: time="2026-01-14T06:04:56.625169571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:04:56.950348 containerd[1601]: time="2026-01-14T06:04:56.950243177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:56.952894 containerd[1601]: time="2026-01-14T06:04:56.952773155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:04:56.952976 containerd[1601]: time="2026-01-14T06:04:56.952909973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:56.953091 kubelet[2756]: E0114 06:04:56.953015 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:04:56.953091 
kubelet[2756]: E0114 06:04:56.953086 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:04:56.953279 kubelet[2756]: E0114 06:04:56.953190 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:56.954756 kubelet[2756]: E0114 06:04:56.954550 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:04:57.560332 containerd[1601]: time="2026-01-14T06:04:57.560225623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:04:57.620467 containerd[1601]: time="2026-01-14T06:04:57.620377453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:57.622370 containerd[1601]: time="2026-01-14T06:04:57.622301709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:04:57.622501 containerd[1601]: time="2026-01-14T06:04:57.622378139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:57.622895 kubelet[2756]: E0114 06:04:57.622527 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:04:57.622895 kubelet[2756]: E0114 06:04:57.622755 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:04:57.623380 kubelet[2756]: E0114 06:04:57.623148 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wblzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:57.623529 containerd[1601]: time="2026-01-14T06:04:57.623077547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:04:57.625095 kubelet[2756]: E0114 06:04:57.625065 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:04:57.684038 containerd[1601]: time="2026-01-14T06:04:57.683790137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:57.686314 containerd[1601]: 
time="2026-01-14T06:04:57.685957321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:04:57.686314 containerd[1601]: time="2026-01-14T06:04:57.686130646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:57.687409 kubelet[2756]: E0114 06:04:57.687228 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:04:57.687409 kubelet[2756]: E0114 06:04:57.687299 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:04:57.687541 kubelet[2756]: E0114 06:04:57.687446 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg59x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:57.688902 kubelet[2756]: E0114 06:04:57.688845 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:04:58.559307 containerd[1601]: time="2026-01-14T06:04:58.559047458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:04:58.616840 containerd[1601]: time="2026-01-14T06:04:58.616718660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
06:04:58.618802 containerd[1601]: time="2026-01-14T06:04:58.618517476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:04:58.618802 containerd[1601]: time="2026-01-14T06:04:58.618599191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:58.619243 kubelet[2756]: E0114 06:04:58.619051 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:58.619243 kubelet[2756]: E0114 06:04:58.619120 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:58.620033 containerd[1601]: time="2026-01-14T06:04:58.619978956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:04:58.620687 kubelet[2756]: E0114 06:04:58.620495 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj2rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:58.622202 kubelet[2756]: E0114 06:04:58.622157 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:04:58.691997 containerd[1601]: time="2026-01-14T06:04:58.691832324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:04:58.693471 containerd[1601]: time="2026-01-14T06:04:58.693336741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:04:58.693471 containerd[1601]: time="2026-01-14T06:04:58.693436812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:04:58.693907 kubelet[2756]: E0114 06:04:58.693833 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:58.693907 kubelet[2756]: E0114 06:04:58.693889 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:04:58.694305 kubelet[2756]: E0114 06:04:58.694108 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r649q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:04:58.695817 kubelet[2756]: E0114 06:04:58.695689 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:07.558725 kubelet[2756]: E0114 06:05:07.558351 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:07.562734 kubelet[2756]: E0114 06:05:07.561555 2756 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:05:09.559667 kubelet[2756]: E0114 06:05:09.559442 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:10.560538 kubelet[2756]: E0114 06:05:10.559855 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:05:10.560538 kubelet[2756]: E0114 06:05:10.559976 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:05:11.560810 kubelet[2756]: E0114 06:05:11.560735 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:05:13.558642 kubelet[2756]: E0114 06:05:13.558497 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:13.559484 kubelet[2756]: E0114 06:05:13.559189 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:05:18.562332 containerd[1601]: time="2026-01-14T06:05:18.562172250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:05:18.621208 containerd[1601]: time="2026-01-14T06:05:18.621133844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:18.623955 containerd[1601]: time="2026-01-14T06:05:18.623846481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:05:18.623955 containerd[1601]: time="2026-01-14T06:05:18.623942196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:18.624336 kubelet[2756]: E0114 06:05:18.624282 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:05:18.624336 kubelet[2756]: E0114 06:05:18.624324 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:05:18.624923 kubelet[2756]: E0114 06:05:18.624687 2756 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:18.628010 containerd[1601]: time="2026-01-14T06:05:18.627868447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:05:18.693457 containerd[1601]: time="2026-01-14T06:05:18.693360520Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:18.695648 containerd[1601]: time="2026-01-14T06:05:18.695359145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:05:18.695648 containerd[1601]: time="2026-01-14T06:05:18.695463816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:18.695951 kubelet[2756]: E0114 06:05:18.695849 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:05:18.695951 kubelet[2756]: E0114 06:05:18.695922 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:05:18.696782 kubelet[2756]: E0114 06:05:18.696664 2756 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:18.698418 kubelet[2756]: E0114 06:05:18.698322 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:05:23.559893 containerd[1601]: time="2026-01-14T06:05:23.559737660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:05:23.625260 containerd[1601]: time="2026-01-14T06:05:23.625142790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:23.627116 containerd[1601]: time="2026-01-14T06:05:23.626982768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:05:23.627184 containerd[1601]: time="2026-01-14T06:05:23.627113138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:23.627291 kubelet[2756]: E0114 06:05:23.627232 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:05:23.627291 kubelet[2756]: E0114 06:05:23.627274 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:05:23.627768 kubelet[2756]: E0114 06:05:23.627378 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r649q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:23.629021 kubelet[2756]: E0114 06:05:23.628954 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:25.560026 containerd[1601]: time="2026-01-14T06:05:25.559820425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:05:25.561075 kubelet[2756]: E0114 06:05:25.560559 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:25.622097 containerd[1601]: time="2026-01-14T06:05:25.622023681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:25.623541 containerd[1601]: time="2026-01-14T06:05:25.623393615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:05:25.623541 containerd[1601]: time="2026-01-14T06:05:25.623450031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:25.623821 kubelet[2756]: E0114 06:05:25.623739 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:05:25.623821 kubelet[2756]: E0114 06:05:25.623774 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:05:25.624020 kubelet[2756]: E0114 06:05:25.623926 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c30e3dd0f1294436b07d32775ce4f267,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:25.625004 containerd[1601]: time="2026-01-14T06:05:25.624768457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:05:25.690690 containerd[1601]: 
time="2026-01-14T06:05:25.689880764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:25.692716 containerd[1601]: time="2026-01-14T06:05:25.692515126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:05:25.692882 containerd[1601]: time="2026-01-14T06:05:25.692811124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:25.694101 kubelet[2756]: E0114 06:05:25.694015 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:05:25.694101 kubelet[2756]: E0114 06:05:25.694094 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:05:25.694765 containerd[1601]: time="2026-01-14T06:05:25.694494969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:05:25.697401 kubelet[2756]: E0114 06:05:25.697095 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg59x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:25.699321 kubelet[2756]: E0114 06:05:25.698894 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:05:25.755920 containerd[1601]: time="2026-01-14T06:05:25.755763299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:25.757430 containerd[1601]: time="2026-01-14T06:05:25.757310699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:05:25.757499 containerd[1601]: time="2026-01-14T06:05:25.757357491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:25.757847 kubelet[2756]: E0114 06:05:25.757749 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:05:25.757847 kubelet[2756]: E0114 06:05:25.757795 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:05:25.758022 kubelet[2756]: E0114 06:05:25.757890 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:25.759271 kubelet[2756]: E0114 06:05:25.759172 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:05:26.561476 containerd[1601]: time="2026-01-14T06:05:26.561326185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:05:26.622107 containerd[1601]: time="2026-01-14T06:05:26.621876561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:26.623461 containerd[1601]: time="2026-01-14T06:05:26.623332930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:05:26.623461 containerd[1601]: time="2026-01-14T06:05:26.623433613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:26.623779 kubelet[2756]: E0114 06:05:26.623554 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:05:26.623779 kubelet[2756]: E0114 06:05:26.623717 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:05:26.624332 kubelet[2756]: E0114 06:05:26.623862 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wblzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:26.625265 kubelet[2756]: E0114 06:05:26.625165 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 
14 06:05:28.561150 containerd[1601]: time="2026-01-14T06:05:28.561024430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:05:28.631885 containerd[1601]: time="2026-01-14T06:05:28.631831822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:05:28.633914 containerd[1601]: time="2026-01-14T06:05:28.633745198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:05:28.633914 containerd[1601]: time="2026-01-14T06:05:28.633845536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:05:28.634172 kubelet[2756]: E0114 06:05:28.634130 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:05:28.634172 kubelet[2756]: E0114 06:05:28.634177 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:05:28.635146 kubelet[2756]: E0114 06:05:28.634288 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj2rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:05:28.635612 kubelet[2756]: E0114 06:05:28.635528 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:05:29.563953 kubelet[2756]: E0114 06:05:29.563818 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:29.565370 kubelet[2756]: E0114 06:05:29.565292 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:05:35.559678 kubelet[2756]: E0114 06:05:35.559087 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:36.560224 kubelet[2756]: E0114 06:05:36.560109 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:38.560197 kubelet[2756]: E0114 06:05:38.560079 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:05:38.560197 kubelet[2756]: E0114 06:05:38.560165 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:05:41.560759 kubelet[2756]: E0114 06:05:41.560662 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:05:41.563298 kubelet[2756]: E0114 06:05:41.563258 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:05:42.923992 update_engine[1572]: I20260114 06:05:42.923826 1572 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 06:05:42.923992 update_engine[1572]: I20260114 
06:05:42.923961 1572 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 06:05:42.926062 update_engine[1572]: I20260114 06:05:42.926009 1572 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 06:05:42.927008 update_engine[1572]: I20260114 06:05:42.926905 1572 omaha_request_params.cc:62] Current group set to developer Jan 14 06:05:42.927165 update_engine[1572]: I20260114 06:05:42.927120 1572 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 06:05:42.927165 update_engine[1572]: I20260114 06:05:42.927157 1572 update_attempter.cc:643] Scheduling an action processor start. Jan 14 06:05:42.927232 update_engine[1572]: I20260114 06:05:42.927179 1572 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 06:05:42.927252 update_engine[1572]: I20260114 06:05:42.927233 1572 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 06:05:42.927351 update_engine[1572]: I20260114 06:05:42.927304 1572 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 06:05:42.927351 update_engine[1572]: I20260114 06:05:42.927345 1572 omaha_request_action.cc:272] Request: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927351 update_engine[1572]: Jan 14 06:05:42.927534 update_engine[1572]: I20260114 06:05:42.927356 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 06:05:42.934758 update_engine[1572]: I20260114 06:05:42.934691 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 06:05:42.935596 update_engine[1572]: I20260114 06:05:42.935348 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 06:05:42.943361 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 06:05:42.951891 update_engine[1572]: E20260114 06:05:42.951774 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 06:05:42.951891 update_engine[1572]: I20260114 06:05:42.951860 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 06:05:43.560884 kubelet[2756]: E0114 06:05:43.560324 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:05:47.564081 kubelet[2756]: E0114 06:05:47.563973 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:49.653762 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:37356.service - OpenSSH per-connection server daemon (10.0.0.1:37356). 
Jan 14 06:05:49.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:05:49.656759 kernel: kauditd_printk_skb: 242 callbacks suppressed Jan 14 06:05:49.656840 kernel: audit: type=1130 audit(1768370749.653:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:05:49.790000 audit[5037]: USER_ACCT pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.802736 kernel: audit: type=1101 audit(1768370749.790:747): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.794340 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:05:49.803513 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 37356 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:05:49.802924 systemd-logind[1568]: New session 9 of user core. 
Jan 14 06:05:49.792000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.814733 kernel: audit: type=1103 audit(1768370749.792:748): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.816900 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 06:05:49.792000 audit[5037]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4b2135f0 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:05:49.834167 kernel: audit: type=1006 audit(1768370749.792:749): pid=5037 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 06:05:49.834340 kernel: audit: type=1300 audit(1768370749.792:749): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4b2135f0 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:05:49.792000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:05:49.822000 audit[5037]: USER_START pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 06:05:49.852892 kernel: audit: type=1327 audit(1768370749.792:749): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:05:49.852944 kernel: audit: type=1105 audit(1768370749.822:750): pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.826000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:49.864774 kernel: audit: type=1103 audit(1768370749.826:751): pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:50.016755 sshd[5041]: Connection closed by 10.0.0.1 port 37356 Jan 14 06:05:50.019150 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Jan 14 06:05:50.024000 audit[5037]: USER_END pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:50.028720 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:37356.service: Deactivated successfully. Jan 14 06:05:50.033965 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 06:05:50.036756 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. 
Jan 14 06:05:50.039324 systemd-logind[1568]: Removed session 9. Jan 14 06:05:50.024000 audit[5037]: CRED_DISP pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:50.052191 kernel: audit: type=1106 audit(1768370750.024:752): pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:50.052347 kernel: audit: type=1104 audit(1768370750.024:753): pid=5037 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:50.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:05:52.560248 kubelet[2756]: E0114 06:05:52.559984 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:05:52.879950 update_engine[1572]: I20260114 06:05:52.879719 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 06:05:52.881219 update_engine[1572]: I20260114 06:05:52.881129 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 06:05:52.881869 update_engine[1572]: I20260114 06:05:52.881776 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 06:05:52.899871 update_engine[1572]: E20260114 06:05:52.899763 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 06:05:52.899965 update_engine[1572]: I20260114 06:05:52.899883 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 06:05:53.561168 kubelet[2756]: E0114 06:05:53.560803 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:05:53.561168 kubelet[2756]: E0114 06:05:53.560860 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:05:54.560701 kubelet[2756]: E0114 06:05:54.560649 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:05:55.038162 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:46730.service - OpenSSH per-connection server daemon (10.0.0.1:46730). Jan 14 06:05:55.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:46730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:05:55.042297 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:05:55.042374 kernel: audit: type=1130 audit(1768370755.037:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:46730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:05:55.115000 audit[5056]: USER_ACCT pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.117021 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 46730 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:05:55.120950 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:05:55.118000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.134164 systemd-logind[1568]: New session 10 of user core. Jan 14 06:05:55.137665 kernel: audit: type=1101 audit(1768370755.115:756): pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.137728 kernel: audit: type=1103 audit(1768370755.118:757): pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.144282 kernel: audit: type=1006 audit(1768370755.118:758): pid=5056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 06:05:55.118000 audit[5056]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9158d260 a2=3 a3=0 items=0 ppid=1 pid=5056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:05:55.156122 kernel: audit: type=1300 audit(1768370755.118:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9158d260 a2=3 a3=0 items=0 ppid=1 pid=5056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:05:55.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:05:55.157012 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 06:05:55.161341 kernel: audit: type=1327 audit(1768370755.118:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:05:55.161000 audit[5056]: USER_START pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.164000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.186216 kernel: audit: type=1105 audit(1768370755.161:759): pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.186863 kernel: audit: type=1103 audit(1768370755.164:760): pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.293518 sshd[5060]: Connection closed by 10.0.0.1 port 46730 Jan 14 06:05:55.295783 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Jan 14 06:05:55.297000 audit[5056]: USER_END pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.301994 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:46730.service: Deactivated successfully. Jan 14 06:05:55.305079 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 06:05:55.307137 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Jan 14 06:05:55.308851 systemd-logind[1568]: Removed session 10. 
Jan 14 06:05:55.297000 audit[5056]: CRED_DISP pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.322297 kernel: audit: type=1106 audit(1768370755.297:761): pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.322510 kernel: audit: type=1104 audit(1768370755.297:762): pid=5056 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:05:55.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:46730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:05:56.558655 kubelet[2756]: E0114 06:05:56.558503 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:05:58.562385 kubelet[2756]: E0114 06:05:58.561926 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:05:58.562385 kubelet[2756]: E0114 06:05:58.561980 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:06:00.308468 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:46740.service - OpenSSH per-connection server daemon (10.0.0.1:46740). Jan 14 06:06:00.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:00.312316 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:00.312444 kernel: audit: type=1130 audit(1768370760.307:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:00.383000 audit[5083]: USER_ACCT pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.385703 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 46740 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:00.388748 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:00.396856 systemd-logind[1568]: New session 11 of user core. 
Jan 14 06:06:00.385000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.419115 kernel: audit: type=1101 audit(1768370760.383:765): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.419186 kernel: audit: type=1103 audit(1768370760.385:766): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.429670 kernel: audit: type=1006 audit(1768370760.385:767): pid=5083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 06:06:00.385000 audit[5083]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe34157720 a2=3 a3=0 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:00.448111 kernel: audit: type=1300 audit(1768370760.385:767): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe34157720 a2=3 a3=0 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:00.385000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:00.448984 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 06:06:00.452833 kernel: audit: type=1327 audit(1768370760.385:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:00.451000 audit[5083]: USER_START pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.466155 kernel: audit: type=1105 audit(1768370760.451:768): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.453000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.476754 kernel: audit: type=1103 audit(1768370760.453:769): pid=5087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.569081 sshd[5087]: Connection closed by 10.0.0.1 port 46740 Jan 14 06:06:00.570869 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:00.571000 audit[5083]: USER_END pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 14 06:06:00.576325 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:46740.service: Deactivated successfully. Jan 14 06:06:00.579045 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 06:06:00.580709 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Jan 14 06:06:00.582070 systemd-logind[1568]: Removed session 11. Jan 14 06:06:00.571000 audit[5083]: CRED_DISP pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.595649 kernel: audit: type=1106 audit(1768370760.571:770): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.595704 kernel: audit: type=1104 audit(1768370760.571:771): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:00.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:46740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:02.879902 update_engine[1572]: I20260114 06:06:02.879711 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 06:06:02.879902 update_engine[1572]: I20260114 06:06:02.879827 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 06:06:02.880385 update_engine[1572]: I20260114 06:06:02.880261 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 06:06:02.895115 update_engine[1572]: E20260114 06:06:02.895021 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 06:06:02.895161 update_engine[1572]: I20260114 06:06:02.895143 1572 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 06:06:05.586169 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). Jan 14 06:06:05.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:05.600358 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:05.600460 kernel: audit: type=1130 audit(1768370765.585:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:05.680000 audit[5105]: USER_ACCT pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.683271 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:05.684902 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:05.691768 systemd-logind[1568]: New session 12 of user core. 
Jan 14 06:06:05.694697 kernel: audit: type=1101 audit(1768370765.680:774): pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.694786 kernel: audit: type=1103 audit(1768370765.681:775): pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.681000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.712357 kernel: audit: type=1006 audit(1768370765.681:776): pid=5105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 06:06:05.712910 kernel: audit: type=1300 audit(1768370765.681:776): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde82c8b0 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:05.681000 audit[5105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde82c8b0 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:05.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:05.729147 kernel: audit: type=1327 audit(1768370765.681:776): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:05.736106 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 06:06:05.739000 audit[5105]: USER_START pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.741000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.764297 kernel: audit: type=1105 audit(1768370765.739:777): pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.764529 kernel: audit: type=1103 audit(1768370765.741:778): pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.847341 sshd[5109]: Connection closed by 10.0.0.1 port 51858 Jan 14 06:06:05.847698 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:05.847000 audit[5105]: USER_END pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 06:06:05.853016 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:51858.service: Deactivated successfully. Jan 14 06:06:05.855778 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 06:06:05.857362 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Jan 14 06:06:05.860240 systemd-logind[1568]: Removed session 12. Jan 14 06:06:05.848000 audit[5105]: CRED_DISP pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.873867 kernel: audit: type=1106 audit(1768370765.847:779): pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.873948 kernel: audit: type=1104 audit(1768370765.848:780): pid=5105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:05.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:06.559319 containerd[1601]: time="2026-01-14T06:06:06.559279196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:06:06.634680 containerd[1601]: time="2026-01-14T06:06:06.634517269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:06.636665 containerd[1601]: time="2026-01-14T06:06:06.636416616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:06:06.636665 containerd[1601]: time="2026-01-14T06:06:06.636520415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:06.637615 kubelet[2756]: E0114 06:06:06.637398 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:06:06.637615 kubelet[2756]: E0114 06:06:06.637474 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:06:06.638652 kubelet[2756]: E0114 06:06:06.638313 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 06:06:06.639467 containerd[1601]: time="2026-01-14T06:06:06.639433706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:06:06.707931 containerd[1601]: time="2026-01-14T06:06:06.707707208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:06.709644 containerd[1601]: time="2026-01-14T06:06:06.709429846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:06:06.709644 containerd[1601]: time="2026-01-14T06:06:06.709499970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:06.709967 kubelet[2756]: E0114 06:06:06.709875 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:06:06.709967 kubelet[2756]: E0114 06:06:06.709954 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:06:06.710280 kubelet[2756]: E0114 06:06:06.710241 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c30e3dd0f1294436b07d32775ce4f267,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:06.711393 containerd[1601]: time="2026-01-14T06:06:06.711030003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:06:06.770987 containerd[1601]: 
time="2026-01-14T06:06:06.770754361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:06.772919 containerd[1601]: time="2026-01-14T06:06:06.772814618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:06:06.773298 containerd[1601]: time="2026-01-14T06:06:06.773172427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:06.774738 kubelet[2756]: E0114 06:06:06.773866 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:06:06.774858 kubelet[2756]: E0114 06:06:06.774833 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:06:06.775408 kubelet[2756]: E0114 06:06:06.775361 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg59x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-997b9f787-7wfms_calico-system(c77379be-a206-411d-9fc6-5a9725c3295c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:06.775805 containerd[1601]: time="2026-01-14T06:06:06.775785470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:06:06.778057 kubelet[2756]: E0114 06:06:06.777895 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:06:06.861953 containerd[1601]: time="2026-01-14T06:06:06.861794505Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 14 06:06:06.863643 containerd[1601]: time="2026-01-14T06:06:06.863476301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:06:06.863887 containerd[1601]: time="2026-01-14T06:06:06.863682608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:06.864178 kubelet[2756]: E0114 06:06:06.863991 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:06:06.864178 kubelet[2756]: E0114 06:06:06.864067 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:06:06.864667 kubelet[2756]: E0114 06:06:06.864394 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp7z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sgth2_calico-system(1fd1f2cb-320b-495b-b1a9-bd981c71562f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:06.865169 containerd[1601]: time="2026-01-14T06:06:06.864758971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:06:06.866167 kubelet[2756]: E0114 06:06:06.865842 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:06:06.935262 containerd[1601]: time="2026-01-14T06:06:06.934990213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:06.938180 containerd[1601]: time="2026-01-14T06:06:06.937921779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:06:06.938180 containerd[1601]: time="2026-01-14T06:06:06.937958819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:06.939527 kubelet[2756]: E0114 06:06:06.938672 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:06:06.939527 kubelet[2756]: E0114 06:06:06.938736 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:06:06.939527 kubelet[2756]: E0114 06:06:06.938915 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNon
Root:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6794498458-8d9kg_calico-system(db149086-b13e-4a98-bab8-a1cf713424f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:06.940681 kubelet[2756]: E0114 06:06:06.940461 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:06:08.558806 kubelet[2756]: E0114 06:06:08.558706 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:06:08.561367 containerd[1601]: time="2026-01-14T06:06:08.561169497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:06:08.633041 containerd[1601]: time="2026-01-14T06:06:08.632968985Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:08.635074 containerd[1601]: time="2026-01-14T06:06:08.634673837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:06:08.635074 containerd[1601]: time="2026-01-14T06:06:08.634776254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:08.635643 kubelet[2756]: E0114 06:06:08.634997 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:06:08.635643 kubelet[2756]: E0114 06:06:08.635043 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:06:08.635643 kubelet[2756]: E0114 06:06:08.635158 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wblzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gbcxx_calico-system(0e777262-7a52-479a-bfac-2fd2fb722412): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:08.636686 kubelet[2756]: E0114 06:06:08.636651 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:06:10.558906 kubelet[2756]: E0114 06:06:10.558795 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:06:10.560193 containerd[1601]: time="2026-01-14T06:06:10.560118758Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:06:10.655887 containerd[1601]: time="2026-01-14T06:06:10.655656137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:10.657397 containerd[1601]: time="2026-01-14T06:06:10.657257321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:06:10.657397 containerd[1601]: time="2026-01-14T06:06:10.657358166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:10.658014 kubelet[2756]: E0114 06:06:10.657843 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:06:10.658084 kubelet[2756]: E0114 06:06:10.658065 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:06:10.658880 kubelet[2756]: E0114 06:06:10.658783 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj2rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-29zzb_calico-apiserver(c28818ff-c451-40e2-8223-e6f03d8b8188): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:10.660030 kubelet[2756]: E0114 06:06:10.659904 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:06:10.865073 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). Jan 14 06:06:10.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:10.868226 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:10.868669 kernel: audit: type=1130 audit(1768370770.864:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:10.935000 audit[5137]: USER_ACCT pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:10.939131 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:10.939408 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:10.946149 systemd-logind[1568]: New session 13 of user core. Jan 14 06:06:10.937000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:10.958993 kernel: audit: type=1101 audit(1768370770.935:783): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:10.959097 kernel: audit: type=1103 audit(1768370770.937:784): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:10.959128 kernel: audit: type=1006 audit(1768370770.937:785): pid=5137 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 06:06:10.937000 audit[5137]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a1f8970 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:10.978659 kernel: audit: type=1300 audit(1768370770.937:785): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a1f8970 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:10.978708 kernel: audit: type=1327 audit(1768370770.937:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:10.937000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:10.983909 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 06:06:10.986000 audit[5137]: USER_START pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:10.989000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.010834 kernel: audit: type=1105 audit(1768370770.986:786): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.010970 kernel: audit: type=1103 audit(1768370770.989:787): pid=5141 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.078110 sshd[5141]: Connection closed by 10.0.0.1 port 51864 Jan 14 06:06:11.078700 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:11.079000 audit[5137]: USER_END pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.080000 audit[5137]: CRED_DISP pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.103550 kernel: audit: type=1106 audit(1768370771.079:788): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.103645 kernel: audit: type=1104 audit(1768370771.080:789): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.108928 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:51864.service: Deactivated successfully. Jan 14 06:06:11.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:11.112121 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 06:06:11.113680 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Jan 14 06:06:11.118630 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:51866.service - OpenSSH per-connection server daemon (10.0.0.1:51866). Jan 14 06:06:11.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:11.119960 systemd-logind[1568]: Removed session 13. Jan 14 06:06:11.181000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.182391 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:11.183000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.183000 audit[5155]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3b854c10 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:11.183000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:11.185025 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:11.191958 systemd-logind[1568]: New 
session 14 of user core. Jan 14 06:06:11.200937 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 06:06:11.203000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.206000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.324476 sshd[5160]: Connection closed by 10.0.0.1 port 51866 Jan 14 06:06:11.325810 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:11.326000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.327000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.336801 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:51866.service: Deactivated successfully. Jan 14 06:06:11.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:11.340003 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 06:06:11.342647 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Jan 14 06:06:11.348923 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:51882.service - OpenSSH per-connection server daemon (10.0.0.1:51882). Jan 14 06:06:11.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:51882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:11.352432 systemd-logind[1568]: Removed session 14. Jan 14 06:06:11.431000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.433000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.434086 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 51882 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:11.433000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc74dda690 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:11.433000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:11.435919 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:11.443278 systemd-logind[1568]: New 
session 15 of user core. Jan 14 06:06:11.455805 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 06:06:11.460000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.462000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.545681 sshd[5176]: Connection closed by 10.0.0.1 port 51882 Jan 14 06:06:11.545000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.545000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:11.544088 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:11.549162 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:51882.service: Deactivated successfully. Jan 14 06:06:11.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:51882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:11.551884 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 06:06:11.553418 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Jan 14 06:06:11.555714 systemd-logind[1568]: Removed session 15. Jan 14 06:06:12.878830 update_engine[1572]: I20260114 06:06:12.878700 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 06:06:12.878830 update_engine[1572]: I20260114 06:06:12.878816 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 06:06:12.879322 update_engine[1572]: I20260114 06:06:12.879204 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 06:06:12.893975 update_engine[1572]: E20260114 06:06:12.893905 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 06:06:12.894048 update_engine[1572]: I20260114 06:06:12.894024 1572 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 06:06:12.894167 update_engine[1572]: I20260114 06:06:12.894043 1572 omaha_request_action.cc:617] Omaha request response: Jan 14 06:06:12.894167 update_engine[1572]: E20260114 06:06:12.894137 1572 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 06:06:12.894231 update_engine[1572]: I20260114 06:06:12.894179 1572 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 06:06:12.894231 update_engine[1572]: I20260114 06:06:12.894191 1572 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 06:06:12.894231 update_engine[1572]: I20260114 06:06:12.894199 1572 update_attempter.cc:306] Processing Done. Jan 14 06:06:12.894231 update_engine[1572]: E20260114 06:06:12.894217 1572 update_attempter.cc:619] Update failed. 
Jan 14 06:06:12.894231 update_engine[1572]: I20260114 06:06:12.894227 1572 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 06:06:12.894333 update_engine[1572]: I20260114 06:06:12.894235 1572 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 06:06:12.894333 update_engine[1572]: I20260114 06:06:12.894244 1572 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 14 06:06:12.898813 update_engine[1572]: I20260114 06:06:12.898734 1572 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 06:06:12.898813 update_engine[1572]: I20260114 06:06:12.898798 1572 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 06:06:12.898813 update_engine[1572]: I20260114 06:06:12.898809 1572 omaha_request_action.cc:272] Request: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.898813 update_engine[1572]: Jan 14 06:06:12.899181 update_engine[1572]: I20260114 06:06:12.898818 1572 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 06:06:12.899181 update_engine[1572]: I20260114 06:06:12.898840 1572 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 06:06:12.899181 update_engine[1572]: I20260114 06:06:12.899139 1572 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 06:06:12.900093 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 06:06:12.914283 update_engine[1572]: E20260114 06:06:12.914031 1572 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914139 1572 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914150 1572 omaha_request_action.cc:617] Omaha request response: Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914160 1572 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914167 1572 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914175 1572 update_attempter.cc:306] Processing Done. Jan 14 06:06:12.914283 update_engine[1572]: I20260114 06:06:12.914182 1572 update_attempter.cc:310] Error event sent. 
Jan 14 06:06:12.917285 update_engine[1572]: I20260114 06:06:12.917115 1572 update_check_scheduler.cc:74] Next update check in 40m9s Jan 14 06:06:12.922018 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 06:06:13.560921 containerd[1601]: time="2026-01-14T06:06:13.560845889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:06:13.628257 containerd[1601]: time="2026-01-14T06:06:13.627973620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:06:13.629840 containerd[1601]: time="2026-01-14T06:06:13.629745973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:06:13.629840 containerd[1601]: time="2026-01-14T06:06:13.629771097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:06:13.630051 kubelet[2756]: E0114 06:06:13.629987 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:06:13.630504 kubelet[2756]: E0114 06:06:13.630064 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:06:13.630504 kubelet[2756]: E0114 06:06:13.630206 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r649q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb4c95c8-lft2v_calico-apiserver(3724f055-c35a-48ef-a153-ecc79aaf3801): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:06:13.632088 kubelet[2756]: E0114 06:06:13.632009 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:06:14.562664 kubelet[2756]: E0114 06:06:14.560955 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:06:16.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:41866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:16.561266 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:41866.service - OpenSSH per-connection server daemon (10.0.0.1:41866). Jan 14 06:06:16.566555 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 06:06:16.567352 kernel: audit: type=1130 audit(1768370776.560:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:41866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:16.662000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.664816 sshd[5197]: Accepted publickey for core from 10.0.0.1 port 41866 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:16.667478 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:16.677042 systemd-logind[1568]: New session 16 of user core. 
Jan 14 06:06:16.679980 kernel: audit: type=1101 audit(1768370776.662:810): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.680828 kernel: audit: type=1103 audit(1768370776.664:811): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.664000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.703324 kernel: audit: type=1006 audit(1768370776.664:812): pid=5197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 06:06:16.703504 kernel: audit: type=1300 audit(1768370776.664:812): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb6a17630 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:16.664000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb6a17630 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:16.704116 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 06:06:16.664000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:16.727973 kernel: audit: type=1327 audit(1768370776.664:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:16.728039 kernel: audit: type=1105 audit(1768370776.715:813): pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.715000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.721000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.759776 kernel: audit: type=1103 audit(1768370776.721:814): pid=5201 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.930854 sshd[5201]: Connection closed by 10.0.0.1 port 41866 Jan 14 06:06:16.931891 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:16.933000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.940234 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:41866.service: Deactivated successfully. Jan 14 06:06:16.943253 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 06:06:16.946675 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Jan 14 06:06:16.948498 systemd-logind[1568]: Removed session 16. Jan 14 06:06:16.933000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.969383 kernel: audit: type=1106 audit(1768370776.933:815): pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.969474 kernel: audit: type=1104 audit(1768370776.933:816): pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:16.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:41866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:18.560835 kubelet[2756]: E0114 06:06:18.560523 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:06:19.567021 kubelet[2756]: E0114 06:06:19.566855 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:06:20.559811 kubelet[2756]: E0114 06:06:20.559654 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:06:21.562420 kubelet[2756]: E0114 06:06:21.561977 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:06:21.948020 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:41872.service - OpenSSH per-connection server daemon (10.0.0.1:41872). Jan 14 06:06:21.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:21.952990 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:21.953133 kernel: audit: type=1130 audit(1768370781.947:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:22.055000 audit[5241]: USER_ACCT pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.056166 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 41872 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:22.060699 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:22.058000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.074658 systemd-logind[1568]: New session 17 of user core. Jan 14 06:06:22.079008 kernel: audit: type=1101 audit(1768370782.055:819): pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.079160 kernel: audit: type=1103 audit(1768370782.058:820): pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.079201 kernel: audit: type=1006 audit(1768370782.058:821): pid=5241 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 06:06:22.058000 audit[5241]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0d11cd10 a2=3 a3=0 items=0 ppid=1 pid=5241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:22.098968 kernel: audit: type=1300 audit(1768370782.058:821): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0d11cd10 a2=3 a3=0 items=0 ppid=1 pid=5241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:22.058000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:22.104139 kernel: audit: type=1327 audit(1768370782.058:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:22.106075 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 06:06:22.110000 audit[5241]: USER_START pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.115000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.135687 kernel: audit: type=1105 audit(1768370782.110:822): pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.135769 kernel: audit: type=1103 audit(1768370782.115:823): pid=5246 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.251440 sshd[5246]: Connection closed by 10.0.0.1 port 41872 Jan 14 06:06:22.251895 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:22.253000 audit[5241]: USER_END pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.258247 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:41872.service: Deactivated successfully. Jan 14 06:06:22.262952 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 06:06:22.267501 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Jan 14 06:06:22.253000 audit[5241]: CRED_DISP pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.270648 systemd-logind[1568]: Removed session 17. 
Jan 14 06:06:22.278832 kernel: audit: type=1106 audit(1768370782.253:824): pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.278902 kernel: audit: type=1104 audit(1768370782.253:825): pid=5241 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:22.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:41872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:25.561676 kubelet[2756]: E0114 06:06:25.561209 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:06:25.611942 kubelet[2756]: E0114 06:06:25.611859 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:06:27.268042 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). Jan 14 06:06:27.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:40666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:27.272158 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:27.272206 kernel: audit: type=1130 audit(1768370787.267:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:40666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:27.347000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.348115 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:27.350554 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:27.348000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.361940 systemd-logind[1568]: New session 18 of user core. 
Jan 14 06:06:27.370655 kernel: audit: type=1101 audit(1768370787.347:828): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.370786 kernel: audit: type=1103 audit(1768370787.348:829): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.370824 kernel: audit: type=1006 audit(1768370787.348:830): pid=5259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 06:06:27.372010 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 06:06:27.377705 kernel: audit: type=1300 audit(1768370787.348:830): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa502b370 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:27.348000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa502b370 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:27.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:27.396666 kernel: audit: type=1327 audit(1768370787.348:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:27.382000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.413673 kernel: audit: type=1105 audit(1768370787.382:831): pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.387000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.425642 kernel: audit: type=1103 audit(1768370787.387:832): pid=5263 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.514900 sshd[5263]: Connection closed by 10.0.0.1 port 40666 Jan 14 06:06:27.515880 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:27.517000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.523680 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:40666.service: Deactivated successfully. Jan 14 06:06:27.526929 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 06:06:27.528408 systemd-logind[1568]: Session 18 logged out. 
Waiting for processes to exit. Jan 14 06:06:27.530485 systemd-logind[1568]: Removed session 18. Jan 14 06:06:27.517000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.543518 kernel: audit: type=1106 audit(1768370787.517:833): pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.544382 kernel: audit: type=1104 audit(1768370787.517:834): pid=5259 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:27.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:40666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:31.560330 kubelet[2756]: E0114 06:06:31.559874 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:06:31.562532 kubelet[2756]: E0114 06:06:31.561863 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:06:32.536375 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:40680.service - OpenSSH per-connection server daemon (10.0.0.1:40680). Jan 14 06:06:32.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:32.548661 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:32.548776 kernel: audit: type=1130 audit(1768370792.534:836): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:32.566498 kubelet[2756]: E0114 06:06:32.566368 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:06:32.569492 kubelet[2756]: E0114 06:06:32.569000 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:06:32.631000 audit[5276]: USER_ACCT pid=5276 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.633945 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:32.639117 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:32.635000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.651686 systemd-logind[1568]: New session 19 of user core. Jan 14 06:06:32.661898 kernel: audit: type=1101 audit(1768370792.631:837): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.662016 kernel: audit: type=1103 audit(1768370792.635:838): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.670641 kernel: audit: type=1006 audit(1768370792.635:839): pid=5276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 06:06:32.635000 audit[5276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc63c94a90 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:32.685975 kernel: audit: type=1300 audit(1768370792.635:839): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc63c94a90 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:32.686085 kernel: audit: type=1327 audit(1768370792.635:839): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:32.635000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:32.686955 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 06:06:32.692000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.697000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.715523 kernel: audit: type=1105 audit(1768370792.692:840): pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.715682 kernel: audit: type=1103 audit(1768370792.697:841): pid=5280 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.802816 sshd[5280]: Connection closed by 10.0.0.1 port 40680 Jan 14 06:06:32.803366 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:32.805000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.819312 kernel: audit: type=1106 audit(1768370792.805:842): pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.823074 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Jan 14 06:06:32.824219 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:40680.service: Deactivated successfully. Jan 14 06:06:32.805000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.831812 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 14 06:06:32.835648 kernel: audit: type=1104 audit(1768370792.805:843): pid=5276 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:32.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:32.841108 systemd-logind[1568]: Removed session 19. Jan 14 06:06:35.559191 kubelet[2756]: E0114 06:06:35.558976 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:06:37.560383 kubelet[2756]: E0114 06:06:37.560226 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:06:37.816889 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:57328.service - OpenSSH per-connection server daemon (10.0.0.1:57328). Jan 14 06:06:37.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:57328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:37.819720 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:37.819793 kernel: audit: type=1130 audit(1768370797.816:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:57328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:37.921000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.922657 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 57328 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:37.926528 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:37.934848 systemd-logind[1568]: New session 20 of user core. 
Jan 14 06:06:37.924000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.945513 kernel: audit: type=1101 audit(1768370797.921:846): pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.945639 kernel: audit: type=1103 audit(1768370797.924:847): pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.945685 kernel: audit: type=1006 audit(1768370797.924:848): pid=5296 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 06:06:37.924000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7937e100 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:37.961690 kernel: audit: type=1300 audit(1768370797.924:848): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7937e100 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:37.961735 kernel: audit: type=1327 audit(1768370797.924:848): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:37.924000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:37.966956 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 06:06:37.969000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.971000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.994094 kernel: audit: type=1105 audit(1768370797.969:849): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:37.994163 kernel: audit: type=1103 audit(1768370797.971:850): pid=5300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:38.062193 sshd[5300]: Connection closed by 10.0.0.1 port 57328 Jan 14 06:06:38.062527 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:38.063000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 06:06:38.067767 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:57328.service: Deactivated successfully. Jan 14 06:06:38.070811 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 06:06:38.072278 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Jan 14 06:06:38.073961 systemd-logind[1568]: Removed session 20. Jan 14 06:06:38.063000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:38.084360 kernel: audit: type=1106 audit(1768370798.063:851): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:38.084526 kernel: audit: type=1104 audit(1768370798.063:852): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:38.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:57328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:38.559862 kubelet[2756]: E0114 06:06:38.559294 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:06:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:06:43.084261 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:57342.service - OpenSSH per-connection server daemon (10.0.0.1:57342). Jan 14 06:06:43.086645 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:06:43.086680 kernel: audit: type=1130 audit(1768370803.082:854): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:43.162000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.164721 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 57342 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:43.166985 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:43.173123 systemd-logind[1568]: New session 21 of user core. Jan 14 06:06:43.164000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.183665 kernel: audit: type=1101 audit(1768370803.162:855): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.183717 kernel: audit: type=1103 audit(1768370803.164:856): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.183836 kernel: audit: type=1006 audit(1768370803.164:857): pid=5315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 06:06:43.164000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc88e02cb0 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:43.200228 kernel: audit: type=1300 audit(1768370803.164:857): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc88e02cb0 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:43.200341 kernel: audit: type=1327 audit(1768370803.164:857): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:43.164000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:43.205798 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 06:06:43.208000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.211000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.224723 kernel: audit: type=1105 audit(1768370803.208:858): pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:43.224750 kernel: audit: type=1103 audit(1768370803.211:859): pid=5319 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.312212 sshd[5319]: Connection closed by 10.0.0.1 port 57342
Jan 14 06:06:43.312679 sshd-session[5315]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:43.313000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.313000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.334763 kernel: audit: type=1106 audit(1768370803.313:860): pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.334839 kernel: audit: type=1104 audit(1768370803.313:861): pid=5315 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.339668 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:57342.service: Deactivated successfully.
Jan 14 06:06:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:57342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 14 06:06:43.342149 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 06:06:43.343669 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit.
Jan 14 06:06:43.348439 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:57348.service - OpenSSH per-connection server daemon (10.0.0.1:57348).
Jan 14 06:06:43.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.149:22-10.0.0.1:57348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:43.350135 systemd-logind[1568]: Removed session 21.
Jan 14 06:06:43.427000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.429183 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 57348 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:06:43.429000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.429000 audit[5332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff205e8290 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:43.429000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:43.432227 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:06:43.438682 systemd-logind[1568]: New 
session 22 of user core.
Jan 14 06:06:43.445773 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 06:06:43.449000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.451000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.559169 kubelet[2756]: E0114 06:06:43.559076 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 06:06:43.798884 sshd[5336]: Connection closed by 10.0.0.1 port 57348
Jan 14 06:06:43.801557 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:43.803000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.803000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.810747 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:57348.service: Deactivated successfully.
Jan 14 06:06:43.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.149:22-10.0.0.1:57348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:43.812959 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 06:06:43.816355 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit.
Jan 14 06:06:43.819464 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:57358.service - OpenSSH per-connection server daemon (10.0.0.1:57358).
Jan 14 06:06:43.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.149:22-10.0.0.1:57358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:43.821437 systemd-logind[1568]: Removed session 22.
Jan 14 06:06:43.898000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.900826 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 57358 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:06:43.900000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.900000 audit[5348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ac0b970 a2=3 a3=0 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:43.900000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:43.903693 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:06:43.910938 systemd-logind[1568]: New session 23 of user core.
Jan 14 06:06:43.917806 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 06:06:43.921000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:43.924000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.480000 audit[5364]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 06:06:44.480000 audit[5364]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff82270ae0 a2=0 a3=7fff82270acc items=0 ppid=2918 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 06:06:44.493000 audit[5364]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 06:06:44.493000 audit[5364]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff82270ae0 a2=0 a3=0 items=0 ppid=2918 
pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 06:06:44.497857 sshd[5352]: Connection closed by 10.0.0.1 port 57358
Jan 14 06:06:44.500054 sshd-session[5348]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:44.500000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.500000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.508472 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:57358.service: Deactivated successfully.
Jan 14 06:06:44.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.149:22-10.0.0.1:57358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:44.511345 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 06:06:44.516013 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit.
Jan 14 06:06:44.520000 audit[5369]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 06:06:44.520000 audit[5369]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff7d778690 a2=0 a3=7fff7d77867c items=0 ppid=2918 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 06:06:44.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.149:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:44.523236 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:42902.service - OpenSSH per-connection server daemon (10.0.0.1:42902).
Jan 14 06:06:44.525000 audit[5369]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 06:06:44.525000 audit[5369]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7d778690 a2=0 a3=0 items=0 ppid=2918 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 06:06:44.526251 systemd-logind[1568]: Removed session 23.
Jan 14 06:06:44.563064 kubelet[2756]: E0114 06:06:44.562040 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c"
Jan 14 06:06:44.566685 kubelet[2756]: E0114 06:06:44.565551 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f"
Jan 14 06:06:44.590000 audit[5371]: USER_ACCT pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.593297 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA 
SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:06:44.592000 audit[5371]: CRED_ACQ pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.592000 audit[5371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5bbb4dc0 a2=3 a3=0 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:44.595668 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:06:44.602066 systemd-logind[1568]: New session 24 of user core.
Jan 14 06:06:44.609849 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 06:06:44.612000 audit[5371]: USER_START pid=5371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.614000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.818037 sshd[5375]: Connection closed by 10.0.0.1 port 42902
Jan 14 06:06:44.818904 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:44.819000 audit[5371]: USER_END pid=5371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.819000 audit[5371]: CRED_DISP pid=5371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.828275 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:42902.service: Deactivated successfully.
Jan 14 06:06:44.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.149:22-10.0.0.1:42902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:44.830552 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 06:06:44.832248 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit.
Jan 14 06:06:44.834257 systemd-logind[1568]: Removed session 24.
Jan 14 06:06:44.838937 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918).
Jan 14 06:06:44.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.149:22-10.0.0.1:42918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:44.903000 audit[5386]: USER_ACCT pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.905740 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:06:44.905000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.905000 audit[5386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc76c30880 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:44.905000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:44.908486 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:06:44.915136 systemd-logind[1568]: New session 25 of user core.
Jan 14 06:06:44.928796 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 06:06:44.929000 audit[5386]: USER_START pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:44.932000 audit[5390]: CRED_ACQ pid=5390 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:45.013778 sshd[5390]: Connection closed by 10.0.0.1 port 42918
Jan 14 06:06:45.014174 sshd-session[5386]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:45.014000 audit[5386]: USER_END pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:45.014000 audit[5386]: CRED_DISP pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:45.018823 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:42918.service: Deactivated successfully.
Jan 14 06:06:45.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.149:22-10.0.0.1:42918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:45.021296 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 06:06:45.024099 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit.
Jan 14 06:06:45.025524 systemd-logind[1568]: Removed session 25.
Jan 14 06:06:46.559760 kubelet[2756]: E0114 06:06:46.559544 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412"
Jan 14 06:06:46.562419 kubelet[2756]: E0114 06:06:46.562298 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8"
Jan 14 06:06:48.560094 kubelet[2756]: E0114 06:06:48.560052 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801"
Jan 14 06:06:50.035228 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:42926.service - OpenSSH per-connection server daemon (10.0.0.1:42926).
Jan 14 06:06:50.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.149:22-10.0.0.1:42926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:50.037682 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jan 14 06:06:50.037747 kernel: audit: type=1130 audit(1768370810.034:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.149:22-10.0.0.1:42926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:50.125000 audit[5431]: USER_ACCT pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.131788 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 06:06:50.137163 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 42926 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 06:06:50.129000 audit[5431]: CRED_ACQ pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.138371 systemd-logind[1568]: New session 26 of user core.
Jan 14 06:06:50.146274 kernel: audit: type=1101 audit(1768370810.125:904): pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.146343 kernel: audit: type=1103 audit(1768370810.129:905): pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.146370 kernel: audit: type=1006 audit(1768370810.129:906): pid=5431 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 14 06:06:50.151789 kernel: audit: type=1300 audit(1768370810.129:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddc2a29c0 a2=3 a3=0 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:50.129000 audit[5431]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddc2a29c0 a2=3 a3=0 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:06:50.129000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:50.171100 kernel: audit: type=1327 audit(1768370810.129:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:06:50.176167 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 06:06:50.180000 audit[5431]: USER_START pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.196744 kernel: audit: type=1105 audit(1768370810.180:907): pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.196802 kernel: audit: type=1103 audit(1768370810.185:908): pid=5435 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.185000 audit[5435]: CRED_ACQ pid=5435 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.334945 sshd[5435]: Connection closed by 10.0.0.1 port 42926
Jan 14 06:06:50.335944 sshd-session[5431]: pam_unix(sshd:session): session closed for user core
Jan 14 06:06:50.338000 audit[5431]: USER_END pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.342272 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:42926.service: Deactivated successfully.
Jan 14 06:06:50.346494 systemd[1]: session-26.scope: Deactivated successfully.
Jan 14 06:06:50.348632 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit.
Jan 14 06:06:50.351557 systemd-logind[1568]: Removed session 26.
Jan 14 06:06:50.338000 audit[5431]: CRED_DISP pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.365281 kernel: audit: type=1106 audit(1768370810.338:909): pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.365353 kernel: audit: type=1104 audit(1768370810.338:910): pid=5431 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:06:50.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.149:22-10.0.0.1:42926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 14 06:06:51.558639 kubelet[2756]: E0114 06:06:51.558528 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 06:06:52.558660 kubelet[2756]: E0114 06:06:52.558301 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 06:06:52.559137 kubelet[2756]: E0114 06:06:52.559024 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188"
Jan 14 06:06:55.351101 systemd[1]: Started sshd@25-10.0.0.149:22-10.0.0.1:54580.service - OpenSSH per-connection server daemon (10.0.0.1:54580).
Jan 14 06:06:55.355676 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 06:06:55.355773 kernel: audit: type=1130 audit(1768370815.350:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.149:22-10.0.0.1:54580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:06:55.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.149:22-10.0.0.1:54580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:55.428000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.429818 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 54580 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:06:55.432439 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:06:55.439351 systemd-logind[1568]: New session 27 of user core. Jan 14 06:06:55.430000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.457958 kernel: audit: type=1101 audit(1768370815.428:913): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.458021 kernel: audit: type=1103 audit(1768370815.430:914): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.458070 kernel: audit: type=1006 audit(1768370815.430:915): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 06:06:55.466335 kernel: audit: type=1300 audit(1768370815.430:915): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff505e0260 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:55.430000 audit[5448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff505e0260 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:55.430000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:55.480972 kernel: audit: type=1327 audit(1768370815.430:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:06:55.486953 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 06:06:55.489000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.493000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.513942 kernel: audit: type=1105 audit(1768370815.489:916): pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.514028 kernel: audit: type=1103 audit(1768370815.493:917): pid=5452 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.562060 kubelet[2756]: E0114 06:06:55.561964 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:06:55.579840 sshd[5452]: Connection closed by 10.0.0.1 port 54580 Jan 14 06:06:55.580463 sshd-session[5448]: pam_unix(sshd:session): session closed for user core Jan 14 06:06:55.582000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.587344 systemd[1]: sshd@25-10.0.0.149:22-10.0.0.1:54580.service: Deactivated successfully. Jan 14 06:06:55.589892 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 06:06:55.591314 systemd-logind[1568]: Session 27 logged out. Waiting for processes to exit. Jan 14 06:06:55.594083 systemd-logind[1568]: Removed session 27. 
Jan 14 06:06:55.582000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.614007 kernel: audit: type=1106 audit(1768370815.582:918): pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.614079 kernel: audit: type=1104 audit(1768370815.582:919): pid=5448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:06:55.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.149:22-10.0.0.1:54580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:06:56.691000 audit[5466]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:06:56.691000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf9a9b7d0 a2=0 a3=7ffcf9a9b7bc items=0 ppid=2918 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:56.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:06:56.714000 audit[5466]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:06:56.714000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcf9a9b7d0 a2=0 a3=7ffcf9a9b7bc items=0 ppid=2918 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:06:56.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:06:57.558436 kubelet[2756]: E0114 06:06:57.558403 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 06:06:57.560272 kubelet[2756]: E0114 06:06:57.560227 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8" Jan 14 06:06:58.558919 kubelet[2756]: E0114 06:06:58.558866 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412" Jan 14 06:06:58.559642 kubelet[2756]: E0114 06:06:58.559220 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f" Jan 14 06:07:00.559169 kubelet[2756]: E0114 06:07:00.559072 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-lft2v" podUID="3724f055-c35a-48ef-a153-ecc79aaf3801" Jan 14 06:07:00.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.149:22-10.0.0.1:54582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:00.592865 systemd[1]: Started sshd@26-10.0.0.149:22-10.0.0.1:54582.service - OpenSSH per-connection server daemon (10.0.0.1:54582). Jan 14 06:07:00.602652 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 06:07:00.602693 kernel: audit: type=1130 audit(1768370820.592:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.149:22-10.0.0.1:54582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:07:00.657000 audit[5470]: USER_ACCT pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.658340 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 54582 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:07:00.660815 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:07:00.667180 systemd-logind[1568]: New session 28 of user core. Jan 14 06:07:00.658000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.677434 kernel: audit: type=1101 audit(1768370820.657:924): pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.677675 kernel: audit: type=1103 audit(1768370820.658:925): pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.677722 kernel: audit: type=1006 audit(1768370820.658:926): pid=5470 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 06:07:00.658000 audit[5470]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe64554ac0 a2=3 a3=0 items=0 ppid=1 pid=5470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:07:00.693718 kernel: audit: type=1300 audit(1768370820.658:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe64554ac0 a2=3 a3=0 items=0 ppid=1 pid=5470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:07:00.658000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:07:00.695792 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 06:07:00.697803 kernel: audit: type=1327 audit(1768370820.658:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:07:00.700000 audit[5470]: USER_START pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.700000 audit[5474]: CRED_ACQ pid=5474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.726101 kernel: audit: type=1105 audit(1768370820.700:927): pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.726160 kernel: audit: type=1103 audit(1768370820.700:928): pid=5474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.783866 sshd[5474]: Connection closed by 10.0.0.1 port 54582 Jan 14 06:07:00.784399 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Jan 14 06:07:00.785000 audit[5470]: USER_END pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.789902 systemd[1]: sshd@26-10.0.0.149:22-10.0.0.1:54582.service: Deactivated successfully. Jan 14 06:07:00.792329 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 06:07:00.795469 systemd-logind[1568]: Session 28 logged out. Waiting for processes to exit. Jan 14 06:07:00.797826 systemd-logind[1568]: Removed session 28. 
Jan 14 06:07:00.799713 kernel: audit: type=1106 audit(1768370820.785:929): pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.800017 kernel: audit: type=1104 audit(1768370820.786:930): pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.786000 audit[5470]: CRED_DISP pid=5470 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:00.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.149:22-10.0.0.1:54582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:05.800359 systemd[1]: Started sshd@27-10.0.0.149:22-10.0.0.1:36298.service - OpenSSH per-connection server daemon (10.0.0.1:36298). Jan 14 06:07:05.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:05.804494 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:07:05.804657 kernel: audit: type=1130 audit(1768370825.799:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:07:05.876000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.877178 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 36298 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:07:05.879836 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:07:05.886132 systemd-logind[1568]: New session 29 of user core. Jan 14 06:07:05.877000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.896192 kernel: audit: type=1101 audit(1768370825.876:933): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.896231 kernel: audit: type=1103 audit(1768370825.877:934): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.896331 kernel: audit: type=1006 audit(1768370825.877:935): pid=5490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 06:07:05.877000 audit[5490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff084e1840 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:07:05.913051 kernel: audit: type=1300 audit(1768370825.877:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff084e1840 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:07:05.913096 kernel: audit: type=1327 audit(1768370825.877:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:07:05.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:07:05.918826 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 06:07:05.921000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.935679 kernel: audit: type=1105 audit(1768370825.921:936): pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.935736 kernel: audit: type=1103 audit(1768370825.923:937): pid=5494 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:05.923000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:06.019685 sshd[5494]: Connection closed by 10.0.0.1 port 36298 Jan 14 06:07:06.019986 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Jan 14 06:07:06.021000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:06.021000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:06.037289 systemd[1]: sshd@27-10.0.0.149:22-10.0.0.1:36298.service: Deactivated successfully. Jan 14 06:07:06.040236 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 06:07:06.041468 systemd-logind[1568]: Session 29 logged out. Waiting for processes to exit. Jan 14 06:07:06.043457 systemd-logind[1568]: Removed session 29. 
Jan 14 06:07:06.044848 kernel: audit: type=1106 audit(1768370826.021:938): pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:06.045016 kernel: audit: type=1104 audit(1768370826.021:939): pid=5490 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:06.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.149:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:06.559287 kubelet[2756]: E0114 06:07:06.559169 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb4c95c8-29zzb" podUID="c28818ff-c451-40e2-8223-e6f03d8b8188" Jan 14 06:07:08.558927 kubelet[2756]: E0114 06:07:08.558816 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-997b9f787-7wfms" podUID="c77379be-a206-411d-9fc6-5a9725c3295c" Jan 14 06:07:11.032840 systemd[1]: Started sshd@28-10.0.0.149:22-10.0.0.1:36312.service - OpenSSH per-connection server daemon (10.0.0.1:36312). Jan 14 06:07:11.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:36312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:11.036048 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:07:11.036194 kernel: audit: type=1130 audit(1768370831.031:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:36312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:07:11.111505 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 36312 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 06:07:11.109000 audit[5508]: USER_ACCT pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:11.114254 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:07:11.111000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 06:07:11.133202 kernel: audit: type=1101 audit(1768370831.109:942): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.133242 kernel: audit: type=1103 audit(1768370831.111:943): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.136751 systemd-logind[1568]: New session 30 of user core.
Jan 14 06:07:11.139765 kernel: audit: type=1006 audit(1768370831.111:944): pid=5508 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 14 06:07:11.139819 kernel: audit: type=1300 audit(1768370831.111:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5d5b59a0 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:07:11.111000 audit[5508]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5d5b59a0 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 06:07:11.111000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:07:11.155936 kernel: audit: type=1327 audit(1768370831.111:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 06:07:11.159803 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 06:07:11.162000 audit[5508]: USER_START pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.166000 audit[5513]: CRED_ACQ pid=5513 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.184972 kernel: audit: type=1105 audit(1768370831.162:945): pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.185085 kernel: audit: type=1103 audit(1768370831.166:946): pid=5513 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.254965 sshd[5513]: Connection closed by 10.0.0.1 port 36312
Jan 14 06:07:11.255868 sshd-session[5508]: pam_unix(sshd:session): session closed for user core
Jan 14 06:07:11.256000 audit[5508]: USER_END pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.262839 systemd[1]: sshd@28-10.0.0.149:22-10.0.0.1:36312.service: Deactivated successfully.
Jan 14 06:07:11.265997 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 06:07:11.267717 systemd-logind[1568]: Session 30 logged out. Waiting for processes to exit.
Jan 14 06:07:11.269674 kernel: audit: type=1106 audit(1768370831.256:947): pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.269725 kernel: audit: type=1104 audit(1768370831.257:948): pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.257000 audit[5508]: CRED_DISP pid=5508 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 06:07:11.270377 systemd-logind[1568]: Removed session 30.
Jan 14 06:07:11.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.149:22-10.0.0.1:36312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 06:07:11.560373 kubelet[2756]: E0114 06:07:11.560197 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sgth2" podUID="1fd1f2cb-320b-495b-b1a9-bd981c71562f"
Jan 14 06:07:11.560373 kubelet[2756]: E0114 06:07:11.559833 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gbcxx" podUID="0e777262-7a52-479a-bfac-2fd2fb722412"
Jan 14 06:07:12.560087 kubelet[2756]: E0114 06:07:12.559919 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6794498458-8d9kg" podUID="db149086-b13e-4a98-bab8-a1cf713424f8"