Jan 22 01:01:50.851326 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 22:02:49 -00 2026 Jan 22 01:01:50.851416 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2 Jan 22 01:01:50.851493 kernel: BIOS-provided physical RAM map: Jan 22 01:01:50.851504 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 22 01:01:50.851515 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 22 01:01:50.851526 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 22 01:01:50.851536 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 22 01:01:50.851546 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 22 01:01:50.852427 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 22 01:01:50.852445 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 22 01:01:50.852514 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 22 01:01:50.852525 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 22 01:01:50.852534 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 22 01:01:50.852543 kernel: NX (Execute Disable) protection: active Jan 22 01:01:50.852556 kernel: APIC: Static calls initialized Jan 22 01:01:50.852775 kernel: SMBIOS 2.8 present. 
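Editorial note: the BIOS-e820 map above declares which physical address ranges are usable RAM versus reserved. As an illustrative sketch (not part of the log), usable memory can be tallied from lines in exactly this format; the ranges below are copied from the map above:

```python
import re

# Hypothetical helper: tally "usable" bytes from BIOS-e820 log lines
# of the form "BIOS-e820: [mem 0xSTART-0xEND] usable".
E820_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(lines):
    total = 0
    for line in lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1   # e820 ranges are inclusive
    return total

# The two usable ranges from the map above:
log = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]
print(usable_bytes(log))  # roughly 2.45 GiB of usable RAM
```

The top of usable RAM (0x9cfdc000) is consistent with the `last_pfn = 0x9cfdc` line further down.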
Jan 22 01:01:50.852836 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 22 01:01:50.852847 kernel: DMI: Memory slots populated: 1/1 Jan 22 01:01:50.852856 kernel: Hypervisor detected: KVM Jan 22 01:01:50.852867 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 22 01:01:50.852879 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 22 01:01:50.852890 kernel: kvm-clock: using sched offset of 12535531683 cycles Jan 22 01:01:50.852900 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 22 01:01:50.852912 kernel: tsc: Detected 2445.426 MHz processor Jan 22 01:01:50.852974 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 22 01:01:50.852987 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 22 01:01:50.853000 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 22 01:01:50.853011 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 22 01:01:50.853022 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 22 01:01:50.853032 kernel: Using GB pages for direct mapping Jan 22 01:01:50.853044 kernel: ACPI: Early table checksum verification disabled Jan 22 01:01:50.853457 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 22 01:01:50.853473 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853485 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853495 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853507 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 22 01:01:50.853518 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853530 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853724 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853738 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 22 01:01:50.853810 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 22 01:01:50.853824 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 22 01:01:50.853834 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 22 01:01:50.853902 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 22 01:01:50.853915 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 22 01:01:50.853927 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 22 01:01:50.853940 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 22 01:01:50.853952 kernel: No NUMA configuration found Jan 22 01:01:50.853962 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 22 01:01:50.853974 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 22 01:01:50.854040 kernel: Zone ranges: Jan 22 01:01:50.854052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 22 01:01:50.854066 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 22 01:01:50.854076 kernel: Normal empty Jan 22 01:01:50.854419 kernel: Device empty Jan 22 01:01:50.854438 kernel: Movable zone start for each node Jan 22 01:01:50.854450 kernel: Early memory node ranges Jan 22 01:01:50.854513 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 22 01:01:50.854525 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 22 01:01:50.854537 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 22 01:01:50.854551 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 22 01:01:50.854708 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 22 01:01:50.854770 kernel: On node 0, zone DMA32: 12324 pages in unavailable 
ranges Jan 22 01:01:50.854783 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 22 01:01:50.854796 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 22 01:01:50.854868 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 22 01:01:50.854879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 22 01:01:50.854933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 22 01:01:50.854948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 22 01:01:50.854959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 22 01:01:50.854970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 22 01:01:50.854981 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 22 01:01:50.855050 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 22 01:01:50.855060 kernel: TSC deadline timer available Jan 22 01:01:50.855072 kernel: CPU topo: Max. logical packages: 1 Jan 22 01:01:50.855082 kernel: CPU topo: Max. logical dies: 1 Jan 22 01:01:50.855474 kernel: CPU topo: Max. dies per package: 1 Jan 22 01:01:50.855486 kernel: CPU topo: Max. threads per core: 1 Jan 22 01:01:50.855499 kernel: CPU topo: Num. cores per package: 4 Jan 22 01:01:50.855767 kernel: CPU topo: Num. 
threads per package: 4 Jan 22 01:01:50.855782 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 22 01:01:50.855793 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 22 01:01:50.855805 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 22 01:01:50.855816 kernel: kvm-guest: setup PV sched yield Jan 22 01:01:50.855828 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 22 01:01:50.855840 kernel: Booting paravirtualized kernel on KVM Jan 22 01:01:50.855853 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 22 01:01:50.855929 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 22 01:01:50.855942 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 22 01:01:50.855954 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 22 01:01:50.855965 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 22 01:01:50.855977 kernel: kvm-guest: PV spinlocks enabled Jan 22 01:01:50.855988 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 22 01:01:50.856001 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2 Jan 22 01:01:50.856065 kernel: random: crng init done Jan 22 01:01:50.856077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 22 01:01:50.856461 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 22 01:01:50.856481 kernel: Fallback order for Node 0: 0 Jan 22 01:01:50.856495 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Jan 22 01:01:50.856507 kernel: Policy zone: DMA32 Jan 22 01:01:50.856726 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 22 01:01:50.856740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 22 01:01:50.856753 kernel: ftrace: allocating 40097 entries in 157 pages Jan 22 01:01:50.856765 kernel: ftrace: allocated 157 pages with 5 groups Jan 22 01:01:50.856776 kernel: Dynamic Preempt: voluntary Jan 22 01:01:50.856789 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 22 01:01:50.856809 kernel: rcu: RCU event tracing is enabled. Jan 22 01:01:50.856821 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 22 01:01:50.856894 kernel: Trampoline variant of Tasks RCU enabled. Jan 22 01:01:50.856946 kernel: Rude variant of Tasks RCU enabled. Jan 22 01:01:50.856959 kernel: Tracing variant of Tasks RCU enabled. Jan 22 01:01:50.856969 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 22 01:01:50.856980 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 22 01:01:50.857031 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 22 01:01:50.857043 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 22 01:01:50.857412 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 22 01:01:50.857428 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 22 01:01:50.857440 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
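Editorial note: the "Kernel command line:" entry above is a space-separated list of bare flags and key=value pairs; note that the loader prepended `rootflags=rw mount.usrflags=ro` a second time, so the same key appears twice. A minimal sketch of splitting such a string (illustrative only, not a kernel API; later duplicates win in this dict):

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare flags map to True.
    For duplicated keys (e.g. rootflags above) the last value wins."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Abbreviated from the command line logged above:
cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # ttyS0,115200
```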
Jan 22 01:01:50.857708 kernel: Console: colour VGA+ 80x25 Jan 22 01:01:50.858844 kernel: printk: legacy console [ttyS0] enabled Jan 22 01:01:50.858858 kernel: ACPI: Core revision 20240827 Jan 22 01:01:50.858870 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 22 01:01:50.858882 kernel: APIC: Switch to symmetric I/O mode setup Jan 22 01:01:50.858893 kernel: x2apic enabled Jan 22 01:01:50.858906 kernel: APIC: Switched APIC routing to: physical x2apic Jan 22 01:01:50.859024 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 22 01:01:50.859038 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 22 01:01:50.859052 kernel: kvm-guest: setup PV IPIs Jan 22 01:01:50.859332 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 22 01:01:50.859348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 22 01:01:50.859361 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 22 01:01:50.859374 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 22 01:01:50.859387 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 22 01:01:50.859400 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 22 01:01:50.859413 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 22 01:01:50.860500 kernel: Spectre V2 : Mitigation: Retpolines Jan 22 01:01:50.860515 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 22 01:01:50.860527 kernel: Speculative Store Bypass: Vulnerable Jan 22 01:01:50.860539 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 22 01:01:50.860552 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 22 01:01:50.860712 kernel: active return thunk: srso_alias_return_thunk Jan 22 01:01:50.860725 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 22 01:01:50.860798 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 22 01:01:50.860810 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 22 01:01:50.860822 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 22 01:01:50.860834 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 22 01:01:50.860845 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 22 01:01:50.860857 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 22 01:01:50.860868 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 22 01:01:50.860922 kernel: Freeing SMP alternatives memory: 32K Jan 22 01:01:50.860934 kernel: pid_max: default: 32768 minimum: 301 Jan 22 01:01:50.860945 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 22 01:01:50.860957 kernel: landlock: Up and running. Jan 22 01:01:50.860968 kernel: SELinux: Initializing. Jan 22 01:01:50.860980 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 22 01:01:50.860991 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 22 01:01:50.861071 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 22 01:01:50.861083 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 22 01:01:50.861364 kernel: signal: max sigframe size: 1776 Jan 22 01:01:50.861376 kernel: rcu: Hierarchical SRCU implementation. Jan 22 01:01:50.861388 kernel: rcu: Max phase no-delay instances is 400. 
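Editorial note: several hash-table lines in this log report a size as an "order", i.e. an allocation of 2**order contiguous 4 KiB pages (so order 4 is 65536 bytes for the 8192-entry mount cache above). A quick sanity check of that relation against three values taken from the log:

```python
PAGE_SIZE = 4096

def order_bytes(order):
    # An order-N allocation spans 2**N contiguous pages.
    return PAGE_SIZE << order

# (name, entries, order, bytes) copied from the log:
tables = [
    ("Mount-cache",    8192,  4,   65536),
    ("Dentry cache", 524288, 10, 4194304),
    ("Inode-cache",  262144,  9, 2097152),
]
for name, entries, order, nbytes in tables:
    assert order_bytes(order) == nbytes
    print(name, order_bytes(order) // entries, "bytes/entry")
```

All three work out to 8 bytes per entry, consistent with a table of pointers.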
Jan 22 01:01:50.861399 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 22 01:01:50.861411 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 22 01:01:50.861469 kernel: smp: Bringing up secondary CPUs ... Jan 22 01:01:50.861481 kernel: smpboot: x86: Booting SMP configuration: Jan 22 01:01:50.861493 kernel: .... node #0, CPUs: #1 #2 #3 Jan 22 01:01:50.861505 kernel: smp: Brought up 1 node, 4 CPUs Jan 22 01:01:50.861517 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 22 01:01:50.861529 kernel: Memory: 2447344K/2571752K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 118472K reserved, 0K cma-reserved) Jan 22 01:01:50.861541 kernel: devtmpfs: initialized Jan 22 01:01:50.861718 kernel: x86/mm: Memory block size: 128MB Jan 22 01:01:50.861732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 22 01:01:50.861744 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 22 01:01:50.861756 kernel: pinctrl core: initialized pinctrl subsystem Jan 22 01:01:50.861767 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 22 01:01:50.861779 kernel: audit: initializing netlink subsys (disabled) Jan 22 01:01:50.861790 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 22 01:01:50.861852 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 22 01:01:50.861864 kernel: audit: type=2000 audit(1769043690.803:1): state=initialized audit_enabled=0 res=1 Jan 22 01:01:50.861875 kernel: cpuidle: using governor menu Jan 22 01:01:50.861887 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 22 01:01:50.861899 kernel: dca service started, version 1.12.1 Jan 22 01:01:50.861910 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 22 01:01:50.861922 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry 
Jan 22 01:01:50.861933 kernel: PCI: Using configuration type 1 for base access Jan 22 01:01:50.861986 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 22 01:01:50.861998 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 22 01:01:50.862010 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 22 01:01:50.862021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 22 01:01:50.862033 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 22 01:01:50.862044 kernel: ACPI: Added _OSI(Module Device) Jan 22 01:01:50.862056 kernel: ACPI: Added _OSI(Processor Device) Jan 22 01:01:50.862735 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 22 01:01:50.862748 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 22 01:01:50.862762 kernel: ACPI: Interpreter enabled Jan 22 01:01:50.862773 kernel: ACPI: PM: (supports S0 S3 S5) Jan 22 01:01:50.862787 kernel: ACPI: Using IOAPIC for interrupt routing Jan 22 01:01:50.862799 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 22 01:01:50.862812 kernel: PCI: Using E820 reservations for host bridge windows Jan 22 01:01:50.862890 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 22 01:01:50.862904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 22 01:01:50.863807 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 22 01:01:50.864460 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 22 01:01:50.864933 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 22 01:01:50.864954 kernel: PCI host bridge to bus 0000:00 Jan 22 01:01:50.865829 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 22 01:01:50.866476 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 22 01:01:50.866937 kernel: 
pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 22 01:01:50.867736 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 22 01:01:50.868017 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 22 01:01:50.868859 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 22 01:01:50.869496 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 22 01:01:50.870044 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 22 01:01:50.870361 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 22 01:01:50.870830 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 22 01:01:50.871135 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 22 01:01:50.871500 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 22 01:01:50.871946 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 22 01:01:50.872235 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 15625 usecs Jan 22 01:01:50.872546 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 22 01:01:50.876371 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 22 01:01:50.876933 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 22 01:01:50.877220 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 22 01:01:50.877530 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 22 01:01:50.877993 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 22 01:01:50.878292 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 22 01:01:50.879291 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 22 01:01:50.880193 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 22 01:01:50.880469 kernel: pci 
0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 22 01:01:50.881166 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 22 01:01:50.881459 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 22 01:01:50.881911 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 22 01:01:50.882431 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 22 01:01:50.883383 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 22 01:01:50.883836 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs Jan 22 01:01:50.884426 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 22 01:01:50.884871 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 22 01:01:50.885145 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 22 01:01:50.885894 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 22 01:01:50.886178 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 22 01:01:50.886196 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 22 01:01:50.886211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 22 01:01:50.886222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 22 01:01:50.886233 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 22 01:01:50.886244 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 22 01:01:50.886323 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 22 01:01:50.886335 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 22 01:01:50.886348 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 22 01:01:50.886359 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 22 01:01:50.886370 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 22 01:01:50.886381 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 22 
01:01:50.886391 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 22 01:01:50.886453 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 22 01:01:50.886464 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 22 01:01:50.886479 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 22 01:01:50.886491 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 22 01:01:50.886502 kernel: iommu: Default domain type: Translated Jan 22 01:01:50.886513 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 22 01:01:50.886524 kernel: PCI: Using ACPI for IRQ routing Jan 22 01:01:50.886737 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 22 01:01:50.886751 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 22 01:01:50.886765 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 22 01:01:50.887063 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 22 01:01:50.887361 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 22 01:01:50.888114 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 22 01:01:50.888132 kernel: vgaarb: loaded Jan 22 01:01:50.888216 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 22 01:01:50.888232 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 22 01:01:50.888246 kernel: clocksource: Switched to clocksource kvm-clock Jan 22 01:01:50.888257 kernel: VFS: Disk quotas dquot_6.6.0 Jan 22 01:01:50.888268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 22 01:01:50.888280 kernel: pnp: PnP ACPI init Jan 22 01:01:50.889186 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 22 01:01:50.889276 kernel: pnp: PnP ACPI: found 6 devices Jan 22 01:01:50.889289 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 22 01:01:50.889301 kernel: NET: Registered PF_INET protocol family Jan 22 
01:01:50.889313 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 22 01:01:50.889326 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 22 01:01:50.889339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 22 01:01:50.889411 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 22 01:01:50.889425 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 22 01:01:50.889437 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 22 01:01:50.889450 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 22 01:01:50.889463 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 22 01:01:50.889477 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 22 01:01:50.889488 kernel: NET: Registered PF_XDP protocol family Jan 22 01:01:50.890152 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 22 01:01:50.890408 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 22 01:01:50.890804 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 22 01:01:50.891300 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 22 01:01:50.891548 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 22 01:01:50.891936 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 22 01:01:50.891954 kernel: PCI: CLS 0 bytes, default 64 Jan 22 01:01:50.892242 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 22 01:01:50.892258 kernel: Initialise system trusted keyrings Jan 22 01:01:50.892270 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 22 01:01:50.892284 kernel: Key type asymmetric registered Jan 22 01:01:50.892295 kernel: Asymmetric key parser 'x509' registered Jan 22 01:01:50.892306 
kernel: hrtimer: interrupt took 14369421 ns Jan 22 01:01:50.892317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 22 01:01:50.892394 kernel: io scheduler mq-deadline registered Jan 22 01:01:50.892405 kernel: io scheduler kyber registered Jan 22 01:01:50.892419 kernel: io scheduler bfq registered Jan 22 01:01:50.892432 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 22 01:01:50.892445 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 22 01:01:50.892456 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 22 01:01:50.892467 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 22 01:01:50.892540 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 22 01:01:50.892552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 22 01:01:50.892656 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 22 01:01:50.892725 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 22 01:01:50.892738 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 22 01:01:50.893532 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 22 01:01:50.893559 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 22 01:01:50.894074 kernel: rtc_cmos 00:04: registered as rtc0 Jan 22 01:01:50.894780 kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T01:01:39 UTC (1769043699) Jan 22 01:01:50.895073 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 22 01:01:50.895096 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 22 01:01:50.895109 kernel: NET: Registered PF_INET6 protocol family Jan 22 01:01:50.895120 kernel: Segment Routing with IPv6 Jan 22 01:01:50.895131 kernel: In-situ OAM (IOAM) with IPv6 Jan 22 01:01:50.895211 kernel: NET: Registered PF_PACKET protocol family Jan 22 01:01:50.895225 kernel: Key type dns_resolver registered Jan 22 01:01:50.895236 kernel: IPI shorthand broadcast: enabled 
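Editorial note: the rtc_cmos line above pairs the human-readable clock with its Unix epoch value ("2026-01-22T01:01:39 UTC (1769043699)"). The conversion can be checked directly:

```python
from datetime import datetime, timezone

# "setting system clock to 2026-01-22T01:01:39 UTC (1769043699)"
ts = datetime.fromtimestamp(1769043699, tz=timezone.utc)
print(ts.isoformat())  # 2026-01-22T01:01:39+00:00
```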
Jan 22 01:01:50.895247 kernel: sched_clock: Marking stable (8029055351, 1285818055)->(10186723044, -871849638) Jan 22 01:01:50.895259 kernel: registered taskstats version 1 Jan 22 01:01:50.895271 kernel: Loading compiled-in X.509 certificates Jan 22 01:01:50.895282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3c3e07c08e874e2a4bf964a0051bfd3618f8b847' Jan 22 01:01:50.895713 kernel: Demotion targets for Node 0: null Jan 22 01:01:50.895731 kernel: Key type .fscrypt registered Jan 22 01:01:50.895742 kernel: Key type fscrypt-provisioning registered Jan 22 01:01:50.895755 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 22 01:01:50.895770 kernel: ima: Allocated hash algorithm: sha1 Jan 22 01:01:50.895781 kernel: ima: No architecture policies found Jan 22 01:01:50.895792 kernel: clk: Disabling unused clocks Jan 22 01:01:50.895880 kernel: Freeing unused kernel image (initmem) memory: 15436K Jan 22 01:01:50.895893 kernel: Write protecting the kernel read-only data: 45056k Jan 22 01:01:50.895908 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 22 01:01:50.895921 kernel: Run /init as init process Jan 22 01:01:50.895933 kernel: with arguments: Jan 22 01:01:50.895948 kernel: /init Jan 22 01:01:50.895959 kernel: with environment: Jan 22 01:01:50.896039 kernel: HOME=/ Jan 22 01:01:50.896052 kernel: TERM=linux Jan 22 01:01:50.896065 kernel: SCSI subsystem initialized Jan 22 01:01:50.896079 kernel: libata version 3.00 loaded. 
Jan 22 01:01:50.896386 kernel: ahci 0000:00:1f.2: version 3.0 Jan 22 01:01:50.896407 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 22 01:01:50.897017 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 22 01:01:50.897380 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 22 01:01:50.898155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 22 01:01:50.899340 kernel: scsi host0: ahci Jan 22 01:01:50.900764 kernel: scsi host1: ahci Jan 22 01:01:50.901138 kernel: scsi host2: ahci Jan 22 01:01:50.902148 kernel: scsi host3: ahci Jan 22 01:01:50.902887 kernel: scsi host4: ahci Jan 22 01:01:50.903190 kernel: scsi host5: ahci Jan 22 01:01:50.903212 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 22 01:01:50.903224 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 22 01:01:50.903235 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 22 01:01:50.903354 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 22 01:01:50.903367 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 22 01:01:50.903378 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 22 01:01:50.903390 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 22 01:01:50.903403 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 22 01:01:50.903984 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 22 01:01:50.903999 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 22 01:01:50.904074 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 22 01:01:50.904087 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 22 01:01:50.904101 kernel: ata3.00: LPM support broken, forcing max_power Jan 22 01:01:50.904112 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 22 01:01:50.904123 
kernel: ata3.00: applying bridge limits Jan 22 01:01:50.904134 kernel: ata3.00: LPM support broken, forcing max_power Jan 22 01:01:50.904147 kernel: ata3.00: configured for UDMA/100 Jan 22 01:01:50.904539 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 22 01:01:50.905481 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 22 01:01:50.906171 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 22 01:01:50.906192 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 22 01:01:50.906509 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 22 01:01:50.906528 kernel: GPT:16515071 != 27000831 Jan 22 01:01:50.906740 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 22 01:01:50.906755 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 22 01:01:50.906766 kernel: GPT:16515071 != 27000831 Jan 22 01:01:50.906777 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 22 01:01:50.906789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 22 01:01:50.907376 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 22 01:01:50.907398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 22 01:01:50.907488 kernel: device-mapper: uevent: version 1.0.3 Jan 22 01:01:50.907709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 22 01:01:50.907727 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 22 01:01:50.907741 kernel: raid6: avx2x4 gen() 17112 MB/s Jan 22 01:01:50.907755 kernel: raid6: avx2x2 gen() 10587 MB/s Jan 22 01:01:50.907766 kernel: raid6: avx2x1 gen() 10353 MB/s Jan 22 01:01:50.907777 kernel: raid6: using algorithm avx2x4 gen() 17112 MB/s Jan 22 01:01:50.907859 kernel: raid6: .... 
xor() 2657 MB/s, rmw enabled Jan 22 01:01:50.907871 kernel: raid6: using avx2x2 recovery algorithm Jan 22 01:01:50.907882 kernel: xor: automatically using best checksumming function avx Jan 22 01:01:50.907958 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 22 01:01:50.908034 kernel: BTRFS: device fsid 79986906-7858-40a3-90f5-bda7e594a44c devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 22 01:01:50.908052 kernel: BTRFS info (device dm-0): first mount of filesystem 79986906-7858-40a3-90f5-bda7e594a44c Jan 22 01:01:50.908068 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 22 01:01:50.908080 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 22 01:01:50.908091 kernel: BTRFS info (device dm-0): enabling free space tree Jan 22 01:01:50.908102 kernel: loop: module loaded Jan 22 01:01:50.908114 kernel: loop0: detected capacity change from 0 to 100160 Jan 22 01:01:50.908409 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 22 01:01:50.908429 systemd[1]: Successfully made /usr/ read-only. Jan 22 01:01:50.908445 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 22 01:01:50.908457 systemd[1]: Detected virtualization kvm. Jan 22 01:01:50.908469 systemd[1]: Detected architecture x86-64. Jan 22 01:01:50.908481 systemd[1]: Running in initrd. Jan 22 01:01:50.908712 systemd[1]: No hostname configured, using default hostname. Jan 22 01:01:50.908865 systemd[1]: Hostname set to . Jan 22 01:01:50.908885 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 22 01:01:50.908901 systemd[1]: Queued start job for default target initrd.target. 
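The "device-mapper: verity: sha256" line is dm-verity activating for the read-only /usr mount: every data block is checked against a hash tree whose root is pinned by the `verity.usrhash=` kernel argument. A minimal sketch of the leaf-level check only (real dm-verity builds a full Merkle tree, and salt placement depends on the on-disk format version — this is an illustration, not the kernel's implementation):

```python
import hashlib

def verify_block(block: bytes, expected_digest: bytes, salt: bytes = b"") -> bool:
    """Check one data block against its hash-tree leaf digest (sha256)."""
    return hashlib.sha256(salt + block).digest() == expected_digest

block = b"\x00" * 4096                      # one 4 KiB data block
leaf = hashlib.sha256(block).digest()       # leaf digest as stored in the hash tree
assert verify_block(block, leaf)
assert not verify_block(b"\x01" + block[1:], leaf)   # any tampering changes the digest
```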
Jan 22 01:01:50.908913 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 01:01:50.908925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 01:01:50.908938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 01:01:50.909023 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 22 01:01:50.909036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 22 01:01:50.909050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 22 01:01:50.909066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 22 01:01:50.909080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 01:01:50.909160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 22 01:01:50.909174 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 22 01:01:50.909187 systemd[1]: Reached target paths.target - Path Units. Jan 22 01:01:50.909201 systemd[1]: Reached target slices.target - Slice Units. Jan 22 01:01:50.909215 systemd[1]: Reached target swap.target - Swaps. Jan 22 01:01:50.909473 systemd[1]: Reached target timers.target - Timer Units. Jan 22 01:01:50.909495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 22 01:01:50.909718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 01:01:50.909735 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 22 01:01:50.909750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 22 01:01:50.909762 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 22 01:01:50.909774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 22 01:01:50.909790 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 22 01:01:50.909803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 01:01:50.909886 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 01:01:50.909902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 22 01:01:50.909917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 22 01:01:50.909932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 22 01:01:50.909947 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 22 01:01:50.909963 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 22 01:01:50.910043 systemd[1]: Starting systemd-fsck-usr.service... Jan 22 01:01:50.910056 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 22 01:01:50.910068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 22 01:01:50.910080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 01:01:50.910162 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 22 01:01:50.910175 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 01:01:50.910187 systemd[1]: Finished systemd-fsck-usr.service. Jan 22 01:01:50.910202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 22 01:01:50.910260 systemd-journald[321]: Collecting audit messages is enabled. 
Jan 22 01:01:50.910727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 01:01:50.910743 systemd-journald[321]: Journal started Jan 22 01:01:50.910815 systemd-journald[321]: Runtime Journal (/run/log/journal/816e14960cac4294a43ee4e18cf7f557) is 6M, max 48.2M, 42.2M free. Jan 22 01:01:50.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:50.946980 kernel: audit: type=1130 audit(1769043710.922:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:50.947053 systemd[1]: Started systemd-journald.service - Journal Service. Jan 22 01:01:50.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:50.973977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 22 01:01:51.429919 kernel: audit: type=1130 audit(1769043710.952:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.430044 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 22 01:01:51.430066 kernel: Bridge firewalling registered Jan 22 01:01:50.990352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 22 01:01:51.061101 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 22 01:01:51.426910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 22 01:01:51.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.506197 kernel: audit: type=1130 audit(1769043711.491:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.509818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 01:01:51.551823 kernel: audit: type=1130 audit(1769043711.520:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.544134 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 22 01:01:51.559781 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 22 01:01:51.611423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 22 01:01:51.670767 kernel: audit: type=1130 audit(1769043711.627:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:01:51.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.631171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 01:01:51.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.694234 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 01:01:51.738115 kernel: audit: type=1130 audit(1769043711.693:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.740184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 22 01:01:51.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.795797 kernel: audit: type=1130 audit(1769043711.772:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.796377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 22 01:01:51.848790 kernel: audit: type=1130 audit(1769043711.812:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:01:51.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:51.829170 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 22 01:01:51.888310 kernel: audit: type=1334 audit(1769043711.851:10): prog-id=6 op=LOAD Jan 22 01:01:51.851000 audit: BPF prog-id=6 op=LOAD Jan 22 01:01:51.856894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 22 01:01:51.976421 dracut-cmdline[359]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2 Jan 22 01:01:52.131303 systemd-resolved[360]: Positive Trust Anchors: Jan 22 01:01:52.132649 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 01:01:52.132656 systemd-resolved[360]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 01:01:52.132756 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 01:01:52.330156 systemd-resolved[360]: Defaulting to hostname 'linux'. Jan 22 01:01:52.354206 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 22 01:01:52.402248 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 01:01:52.457004 kernel: audit: type=1130 audit(1769043712.401:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:52.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:52.725051 kernel: Loading iSCSI transport class v2.0-870. Jan 22 01:01:52.802469 kernel: iscsi: registered transport (tcp) Jan 22 01:01:52.873371 kernel: iscsi: registered transport (qla4xxx) Jan 22 01:01:52.873463 kernel: QLogic iSCSI HBA Driver Jan 22 01:01:53.006985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 22 01:01:53.099232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
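The negative trust anchors listed by systemd-resolved are names (private-use and locally served zones) under which DNSSEC validation is skipped. Membership is a suffix check over DNS labels; a sketch using only a subset of the anchors from the log (`under_negative_anchor` is a hypothetical helper, not resolved's code):

```python
# subset of the negative trust anchors printed above
NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
                    "local", "test"}

def under_negative_anchor(name: str) -> bool:
    """True if name equals, or is a subdomain of, any negative trust anchor."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))

assert under_negative_anchor("printer.local")
assert under_negative_anchor("4.3.2.10.in-addr.arpa")   # reverse lookup in 10/8
assert not under_negative_anchor("example.org")
```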
Jan 22 01:01:53.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:53.114441 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 22 01:01:53.386469 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 22 01:01:53.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:53.419324 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 22 01:01:53.462164 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 22 01:01:53.645246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 22 01:01:53.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:53.646000 audit: BPF prog-id=7 op=LOAD Jan 22 01:01:53.646000 audit: BPF prog-id=8 op=LOAD Jan 22 01:01:53.649888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 01:01:53.771472 systemd-udevd[595]: Using default interface naming scheme 'v257'. Jan 22 01:01:53.833085 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 01:01:53.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:53.870889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 22 01:01:53.958850 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation Jan 22 01:01:54.070790 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 22 01:01:54.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:54.106312 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 01:01:54.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:54.115000 audit: BPF prog-id=9 op=LOAD Jan 22 01:01:54.122820 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 01:01:54.166245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 22 01:01:54.373813 systemd-networkd[731]: lo: Link UP Jan 22 01:01:54.373829 systemd-networkd[731]: lo: Gained carrier Jan 22 01:01:54.388157 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 01:01:54.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:54.408101 systemd[1]: Reached target network.target - Network. Jan 22 01:01:54.482278 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 01:01:54.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:54.528251 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 22 01:01:54.669050 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 22 01:01:54.712968 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 22 01:01:54.758818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 22 01:01:54.773388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 22 01:01:54.796077 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 22 01:01:54.835761 kernel: cryptd: max_cpu_qlen set to 1000 Jan 22 01:01:54.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:54.806837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 01:01:54.807249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 01:01:54.816111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 01:01:54.845196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 01:01:54.883771 disk-uuid[774]: Primary Header is updated. Jan 22 01:01:54.883771 disk-uuid[774]: Secondary Entries is updated. Jan 22 01:01:54.883771 disk-uuid[774]: Secondary Header is updated. Jan 22 01:01:54.989358 kernel: AES CTR mode by8 optimization enabled Jan 22 01:01:54.999752 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 22 01:01:55.094173 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 01:01:55.094271 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 22 01:01:55.099497 systemd-networkd[731]: eth0: Link UP Jan 22 01:01:55.100068 systemd-networkd[731]: eth0: Gained carrier Jan 22 01:01:55.100089 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 01:01:55.133816 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 22 01:01:55.199472 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 22 01:01:55.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:55.642030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 01:01:55.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:55.678072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 01:01:55.707012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 01:01:55.752529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 22 01:01:55.785104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 22 01:01:55.919543 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 22 01:01:55.990085 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 22 01:01:55.990698 kernel: audit: type=1130 audit(1769043715.938:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
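The DHCPv4 lease above ("10.0.0.144/16, gateway 10.0.0.1") only works because the gateway is on-link, i.e. inside the interface's subnet. The relationship can be checked with the standard `ipaddress` module:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.144/16")   # address + prefix from the lease
gateway = ipaddress.ip_address("10.0.0.1")

# the default gateway must be reachable on-link, inside the interface's subnet
assert gateway in iface.network
print(iface.network)  # → 10.0.0.0/16
```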
res=success' Jan 22 01:01:55.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:55.996557 disk-uuid[776]: Warning: The kernel is still using the old partition table. Jan 22 01:01:55.996557 disk-uuid[776]: The new table will be used at the next reboot or after you Jan 22 01:01:55.996557 disk-uuid[776]: run partprobe(8) or kpartx(8) Jan 22 01:01:55.996557 disk-uuid[776]: The operation has completed successfully. Jan 22 01:01:56.046498 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 22 01:01:56.109949 kernel: audit: type=1130 audit(1769043716.052:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.109996 kernel: audit: type=1131 audit(1769043716.052:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.048523 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 22 01:01:56.079088 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
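disk-uuid warns that the kernel keeps using the old partition table until reboot or a re-read. What partprobe(8) does for a whole-disk node is, at its core, a `BLKRRPART` ioctl; a sketch (requires root and a disk with no busy partitions — this is an illustration, not partprobe itself):

```python
import fcntl
import os

BLKRRPART = 0x125F  # _IO(0x12, 95): ask the kernel to re-read the partition table

def reread_partitions(disk: str) -> None:
    """Issue the partition-table re-read ioctl on a whole-disk device node."""
    fd = os.open(disk, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

# usage (needs root): reread_partitions("/dev/vda")
```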
Jan 22 01:01:56.229679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Jan 22 01:01:56.244779 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 01:01:56.244858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 01:01:56.286778 kernel: BTRFS info (device vda6): turning on async discard Jan 22 01:01:56.286863 kernel: BTRFS info (device vda6): enabling free space tree Jan 22 01:01:56.320876 kernel: BTRFS info (device vda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 01:01:56.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.344373 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 22 01:01:56.392436 kernel: audit: type=1130 audit(1769043716.352:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:56.355836 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 22 01:01:56.715347 systemd-networkd[731]: eth0: Gained IPv6LL Jan 22 01:01:56.782143 ignition[886]: Ignition 2.22.0 Jan 22 01:01:56.783191 ignition[886]: Stage: fetch-offline Jan 22 01:01:56.785110 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 22 01:01:56.785131 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 01:01:56.785290 ignition[886]: parsed url from cmdline: "" Jan 22 01:01:56.785297 ignition[886]: no config URL provided Jan 22 01:01:56.785305 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 01:01:56.785330 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 22 01:01:56.785403 ignition[886]: op(1): [started] loading QEMU firmware config module Jan 22 01:01:56.785413 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 22 01:01:56.882479 ignition[886]: op(1): [finished] loading QEMU firmware config module Jan 22 01:01:56.882542 ignition[886]: QEMU firmware config was not found. Ignoring... Jan 22 01:01:57.489285 ignition[886]: parsing config with SHA512: 71a59ae34206afa9241400fa730beb82cbb5846a0005ff2e79096848d964b87607e8f32f4650c33da0fec401a9f129dc06e3073e80374f81642c2616f26881b4 Jan 22 01:01:57.512841 unknown[886]: fetched base config from "system" Jan 22 01:01:57.512861 unknown[886]: fetched user config from "qemu" Jan 22 01:01:57.523492 ignition[886]: fetch-offline: fetch-offline passed Jan 22 01:01:57.523712 ignition[886]: Ignition finished successfully Jan 22 01:01:57.546422 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 01:01:57.580907 kernel: audit: type=1130 audit(1769043717.547:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
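The "parsing config with SHA512: 71a59a…" line is Ignition logging the SHA-512 digest of the raw, merged config bytes before parsing, so a run can be correlated with a known config. The digest itself is plain `hashlib`; the config bytes below are a made-up placeholder, not the one from this boot:

```python
import hashlib

def config_digest(config: bytes) -> str:
    """Hex SHA-512 of raw config bytes, as Ignition reports it."""
    return hashlib.sha512(config).hexdigest()

digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')  # hypothetical config
assert len(digest) == 128   # sha512 -> 64 bytes -> 128 hex characters
```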
res=success' Jan 22 01:01:57.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:57.552188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 22 01:01:57.572939 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 22 01:01:57.697457 ignition[897]: Ignition 2.22.0 Jan 22 01:01:57.700481 ignition[897]: Stage: kargs Jan 22 01:01:57.700884 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 22 01:01:57.750957 kernel: audit: type=1130 audit(1769043717.718:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:57.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:57.715345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 22 01:01:57.700907 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 01:01:57.722522 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 22 01:01:57.702262 ignition[897]: kargs: kargs passed Jan 22 01:01:57.702332 ignition[897]: Ignition finished successfully Jan 22 01:01:57.913127 ignition[906]: Ignition 2.22.0 Jan 22 01:01:57.913206 ignition[906]: Stage: disks Jan 22 01:01:57.913434 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 22 01:01:57.913451 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 01:01:57.957265 ignition[906]: disks: disks passed Jan 22 01:01:57.957419 ignition[906]: Ignition finished successfully Jan 22 01:01:57.975414 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 22 01:01:57.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:58.024237 kernel: audit: type=1130 audit(1769043717.994:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:57.997200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 22 01:01:58.025107 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 22 01:01:58.035125 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 01:01:58.052234 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 01:01:58.052435 systemd[1]: Reached target basic.target - Basic System. Jan 22 01:01:58.129117 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 22 01:01:58.306344 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 22 01:01:58.323000 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 22 01:01:58.365432 kernel: audit: type=1130 audit(1769043718.334:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:58.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:01:58.344218 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 22 01:01:58.973886 kernel: EXT4-fs (vda9): mounted filesystem 2fa3c08b-a48e-45e5-aeb3-7441bca9cf30 r/w with ordered data mode. Quota mode: none. Jan 22 01:01:58.978553 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 22 01:01:58.993377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 22 01:01:59.006050 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 01:01:59.052177 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 22 01:01:59.053408 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 22 01:01:59.112450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924) Jan 22 01:01:59.112485 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 01:01:59.113126 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 01:01:59.053480 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 22 01:01:59.053523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 22 01:01:59.157349 kernel: BTRFS info (device vda6): turning on async discard
Jan 22 01:01:59.157453 kernel: BTRFS info (device vda6): enabling free space tree
Jan 22 01:01:59.166712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 22 01:01:59.201335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 22 01:01:59.218877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 22 01:01:59.453390 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jan 22 01:01:59.475953 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 22 01:01:59.495302 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 22 01:01:59.525812 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 22 01:01:59.897420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 22 01:01:59.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:01:59.931272 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 22 01:01:59.972198 kernel: audit: type=1130 audit(1769043719.926:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:01:59.980339 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 22 01:02:00.011961 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 22 01:02:00.039517 kernel: BTRFS info (device vda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254
Jan 22 01:02:00.144204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 22 01:02:00.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:00.181231 kernel: audit: type=1130 audit(1769043720.165:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:00.191718 ignition[1038]: INFO : Ignition 2.22.0
Jan 22 01:02:00.191718 ignition[1038]: INFO : Stage: mount
Jan 22 01:02:00.207071 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 22 01:02:00.207071 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 22 01:02:00.207071 ignition[1038]: INFO : mount: mount passed
Jan 22 01:02:00.207071 ignition[1038]: INFO : Ignition finished successfully
Jan 22 01:02:00.229359 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 22 01:02:00.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:00.248264 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 22 01:02:00.306335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 22 01:02:00.361186 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050)
Jan 22 01:02:00.378415 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254
Jan 22 01:02:00.378490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 22 01:02:00.418480 kernel: BTRFS info (device vda6): turning on async discard
Jan 22 01:02:00.418855 kernel: BTRFS info (device vda6): enabling free space tree
Jan 22 01:02:00.422720 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 22 01:02:00.514342 ignition[1067]: INFO : Ignition 2.22.0
Jan 22 01:02:00.514342 ignition[1067]: INFO : Stage: files
Jan 22 01:02:00.525432 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 22 01:02:00.525432 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 22 01:02:00.546629 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jan 22 01:02:00.567296 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 22 01:02:00.567296 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 22 01:02:00.602336 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 22 01:02:00.614261 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 22 01:02:00.623835 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 22 01:02:00.622132 unknown[1067]: wrote ssh authorized keys file for user: core
Jan 22 01:02:00.642122 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 22 01:02:00.642122 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 22 01:02:00.721257 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 22 01:02:00.837168 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 22 01:02:00.837168 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 22 01:02:00.886210 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 22 01:02:01.280163 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 22 01:02:02.522650 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 22 01:02:02.522650 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 22 01:02:02.561695 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 22 01:02:02.715707 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 22 01:02:02.738076 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 22 01:02:02.738076 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 22 01:02:02.738076 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 22 01:02:02.738076 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 22 01:02:02.796145 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 22 01:02:02.796145 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 22 01:02:02.796145 ignition[1067]: INFO : files: files passed
Jan 22 01:02:02.796145 ignition[1067]: INFO : Ignition finished successfully
Jan 22 01:02:02.891079 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 22 01:02:02.891206 kernel: audit: type=1130 audit(1769043722.795:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.786513 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 22 01:02:02.803904 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 22 01:02:02.907523 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 22 01:02:02.929189 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 22 01:02:02.937948 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 22 01:02:02.982128 kernel: audit: type=1130 audit(1769043722.944:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.982171 kernel: audit: type=1131 audit(1769043722.944:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:02.989722 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 22 01:02:03.006474 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 22 01:02:03.006474 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 22 01:02:03.027416 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 22 01:02:03.038492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 22 01:02:03.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.056924 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 22 01:02:03.076144 kernel: audit: type=1130 audit(1769043723.053:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.094917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 22 01:02:03.280014 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 01:02:03.291128 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 22 01:02:03.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.311535 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 22 01:02:03.339355 kernel: audit: type=1130 audit(1769043723.303:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.339399 kernel: audit: type=1131 audit(1769043723.310:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.351843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 22 01:02:03.372154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 22 01:02:03.377874 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 22 01:02:03.523695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 22 01:02:03.559465 kernel: audit: type=1130 audit(1769043723.534:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.540530 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 22 01:02:03.606359 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 22 01:02:03.606944 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 22 01:02:03.639328 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 01:02:03.739082 kernel: audit: type=1131 audit(1769043723.707:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:03.657889 systemd[1]: Stopped target timers.target - Timer Units.
Jan 22 01:02:03.679400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 01:02:03.681846 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 22 01:02:03.776442 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 22 01:02:03.785153 systemd[1]: Stopped target basic.target - Basic System.
Jan 22 01:02:03.806155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 22 01:02:03.821174 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 22 01:02:03.842331 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 22 01:02:03.876246 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 22 01:02:03.906330 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 22 01:02:03.915945 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 22 01:02:03.950368 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 22 01:02:03.950889 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 22 01:02:03.986232 systemd[1]: Stopped target swap.target - Swaps.
Jan 22 01:02:03.996506 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 01:02:04.000095 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 22 01:02:04.101254 kernel: audit: type=1131 audit(1769043724.036:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.038110 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 22 01:02:04.070086 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 01:02:04.176078 kernel: audit: type=1131 audit(1769043724.127:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.089059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 22 01:02:04.091909 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 01:02:04.113772 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 01:02:04.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.114110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 22 01:02:04.175993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 22 01:02:04.177996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 22 01:02:04.219987 systemd[1]: Stopped target paths.target - Path Units.
Jan 22 01:02:04.246290 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 01:02:04.250107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 01:02:04.270194 systemd[1]: Stopped target slices.target - Slice Units.
Jan 22 01:02:04.297510 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 22 01:02:04.305736 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 22 01:02:04.308118 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 22 01:02:04.326530 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 22 01:02:04.329485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 22 01:02:04.379436 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 22 01:02:04.379558 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 01:02:04.396456 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 22 01:02:04.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.402546 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 22 01:02:04.493529 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 22 01:02:04.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.495049 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 22 01:02:04.532752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 22 01:02:04.548036 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 22 01:02:04.577068 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 01:02:04.583676 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 22 01:02:04.606074 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 01:02:04.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.606279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 01:02:04.626474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 01:02:04.626746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 22 01:02:04.723259 ignition[1124]: INFO : Ignition 2.22.0
Jan 22 01:02:04.723259 ignition[1124]: INFO : Stage: umount
Jan 22 01:02:04.723259 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 22 01:02:04.723259 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 22 01:02:04.723259 ignition[1124]: INFO : umount: umount passed
Jan 22 01:02:04.723259 ignition[1124]: INFO : Ignition finished successfully
Jan 22 01:02:04.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.729029 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 22 01:02:04.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.731505 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 22 01:02:04.732024 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 22 01:02:04.775369 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 22 01:02:04.918000 audit: BPF prog-id=6 op=UNLOAD
Jan 22 01:02:04.775683 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 22 01:02:04.795161 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 01:02:04.795672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 22 01:02:04.813445 systemd[1]: Stopped target network.target - Network.
Jan 22 01:02:04.815453 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 22 01:02:04.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.815553 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 22 01:02:04.819059 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 22 01:02:04.819121 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 22 01:02:05.006000 audit: BPF prog-id=9 op=UNLOAD
Jan 22 01:02:04.819286 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 22 01:02:04.819347 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 22 01:02:04.819511 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 22 01:02:05.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.820759 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 22 01:02:05.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.822911 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 22 01:02:05.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.822989 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 22 01:02:04.830112 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 22 01:02:04.836941 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 22 01:02:05.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.871400 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 22 01:02:05.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.872219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 22 01:02:04.924419 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 22 01:02:05.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.925325 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 22 01:02:05.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.977455 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 22 01:02:04.987369 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 22 01:02:04.987484 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 01:02:05.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:04.995976 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 22 01:02:05.018424 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 22 01:02:05.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.018555 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 22 01:02:05.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.026347 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 01:02:05.026444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 22 01:02:05.058314 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 01:02:05.058441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 22 01:02:05.078929 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 22 01:02:05.118772 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 01:02:05.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.119204 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 22 01:02:05.139158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 01:02:05.139271 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 22 01:02:05.146716 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 01:02:05.146856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 01:02:05.151479 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 01:02:05.151550 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 22 01:02:05.161744 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 01:02:05.161977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 22 01:02:05.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:05.173234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 22 01:02:05.173338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 22 01:02:05.186063 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 22 01:02:05.189935 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 22 01:02:05.190036 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 01:02:05.237199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 01:02:05.237327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 01:02:05.250217 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 22 01:02:05.250317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 01:02:05.278234 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 01:02:05.278353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 01:02:05.317896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 01:02:05.318010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 01:02:05.367379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 01:02:05.367744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 22 01:02:05.525091 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 22 01:02:05.525406 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 22 01:02:05.540276 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 22 01:02:05.577171 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 22 01:02:05.765034 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Jan 22 01:02:05.651365 systemd[1]: Switching root.
Jan 22 01:02:05.775239 systemd-journald[321]: Journal stopped
Jan 22 01:02:10.052808 kernel: SELinux: policy capability network_peer_controls=1
Jan 22 01:02:10.052976 kernel: SELinux: policy capability open_perms=1
Jan 22 01:02:10.053018 kernel: SELinux: policy capability extended_socket_class=1
Jan 22 01:02:10.053039 kernel: SELinux: policy capability always_check_network=0
Jan 22 01:02:10.053062 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 22 01:02:10.053081 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 22 01:02:10.053104 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 22 01:02:10.053183 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 22 01:02:10.053204 kernel: SELinux: policy capability userspace_initial_context=0
Jan 22 01:02:10.053225 systemd[1]: Successfully loaded SELinux policy in 162.891ms.
Jan 22 01:02:10.053263 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.597ms.
Jan 22 01:02:10.053285 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 01:02:10.053304 systemd[1]: Detected virtualization kvm.
Jan 22 01:02:10.053322 systemd[1]: Detected architecture x86-64.
Jan 22 01:02:10.053342 systemd[1]: Detected first boot.
Jan 22 01:02:10.053433 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 22 01:02:10.053453 zram_generator::config[1169]: No configuration found.
Jan 22 01:02:10.053476 kernel: Guest personality initialized and is inactive
Jan 22 01:02:10.053496 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 22 01:02:10.053520 kernel: Initialized host personality
Jan 22 01:02:10.053539 kernel: NET: Registered PF_VSOCK protocol family
Jan 22 01:02:10.053689 systemd[1]: Populated /etc with preset unit settings.
Jan 22 01:02:10.053713 kernel: kauditd_printk_skb: 40 callbacks suppressed
Jan 22 01:02:10.053737 kernel: audit: type=1334 audit(1769043728.070:87): prog-id=12 op=LOAD
Jan 22 01:02:10.053755 kernel: audit: type=1334 audit(1769043728.070:88): prog-id=3 op=UNLOAD
Jan 22 01:02:10.053775 kernel: audit: type=1334 audit(1769043728.070:89): prog-id=13 op=LOAD
Jan 22 01:02:10.053794 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 22 01:02:10.053813 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 22 01:02:10.053911 kernel: audit: type=1334 audit(1769043728.070:90): prog-id=14 op=LOAD
Jan 22 01:02:10.053933 kernel: audit: type=1334 audit(1769043728.070:91): prog-id=4 op=UNLOAD
Jan 22 01:02:10.053951 kernel: audit: type=1334 audit(1769043728.070:92): prog-id=5 op=UNLOAD
Jan 22 01:02:10.053970 kernel: audit: type=1131 audit(1769043728.079:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.053987 kernel: audit: type=1334 audit(1769043728.145:94): prog-id=12 op=UNLOAD
Jan 22 01:02:10.054005 kernel: audit: type=1130 audit(1769043728.201:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.054024 kernel: audit: type=1131 audit(1769043728.201:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.054108 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 01:02:10.054134 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 22 01:02:10.054155 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 22 01:02:10.054174 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 22 01:02:10.054193 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 22 01:02:10.054212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 22 01:02:10.054288 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 22 01:02:10.054310 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 22 01:02:10.054332 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 22 01:02:10.054352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 01:02:10.054374 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 01:02:10.054399 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 22 01:02:10.054486 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 22 01:02:10.054510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 22 01:02:10.054529 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 22 01:02:10.054550 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 22 01:02:10.054704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 01:02:10.054730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 22 01:02:10.054748 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 22 01:02:10.054897 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 22 01:02:10.054925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 22 01:02:10.054947 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 22 01:02:10.054966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 01:02:10.054988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 22 01:02:10.055008 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 22 01:02:10.055026 systemd[1]: Reached target slices.target - Slice Units.
Jan 22 01:02:10.055113 systemd[1]: Reached target swap.target - Swaps.
Jan 22 01:02:10.055137 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 22 01:02:10.055155 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 22 01:02:10.055177 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 22 01:02:10.055195 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 01:02:10.055216 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 22 01:02:10.055239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 01:02:10.055258 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 22 01:02:10.055348 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 22 01:02:10.055371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 22 01:02:10.055388 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 01:02:10.055409 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 22 01:02:10.055430 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 22 01:02:10.055448 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 22 01:02:10.055469 systemd[1]: Mounting media.mount - External Media Directory...
Jan 22 01:02:10.055555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 22 01:02:10.055690 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 22 01:02:10.055710 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 22 01:02:10.055733 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 22 01:02:10.055754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 01:02:10.055772 systemd[1]: Reached target machines.target - Containers.
Jan 22 01:02:10.055919 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 22 01:02:10.055944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 22 01:02:10.055962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 22 01:02:10.055983 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 22 01:02:10.056005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 22 01:02:10.056023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 22 01:02:10.056047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 22 01:02:10.056137 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 22 01:02:10.056159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 22 01:02:10.056181 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 22 01:02:10.056201 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 22 01:02:10.056220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 22 01:02:10.056242 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 22 01:02:10.056261 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 22 01:02:10.056348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 22 01:02:10.056371 kernel: ACPI: bus type drm_connector registered
Jan 22 01:02:10.056388 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 22 01:02:10.056474 kernel: fuse: init (API version 7.41)
Jan 22 01:02:10.056497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 22 01:02:10.056515 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 22 01:02:10.056536 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 22 01:02:10.056558 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 22 01:02:10.056718 systemd-journald[1255]: Collecting audit messages is enabled.
Jan 22 01:02:10.056902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 22 01:02:10.056929 systemd-journald[1255]: Journal started
Jan 22 01:02:10.056963 systemd-journald[1255]: Runtime Journal (/run/log/journal/816e14960cac4294a43ee4e18cf7f557) is 6M, max 48.2M, 42.2M free.
Jan 22 01:02:09.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:09.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:09.870000 audit: BPF prog-id=14 op=UNLOAD
Jan 22 01:02:09.870000 audit: BPF prog-id=13 op=UNLOAD
Jan 22 01:02:09.878000 audit: BPF prog-id=15 op=LOAD
Jan 22 01:02:09.888000 audit: BPF prog-id=16 op=LOAD
Jan 22 01:02:09.892000 audit: BPF prog-id=17 op=LOAD
Jan 22 01:02:10.044000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 22 01:02:10.044000 audit[1255]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd2562c180 a2=4000 a3=0 items=0 ppid=1 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 01:02:10.044000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 22 01:02:08.032486 systemd[1]: Queued start job for default target multi-user.target.
Jan 22 01:02:08.073753 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 22 01:02:08.079196 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 01:02:08.080480 systemd[1]: systemd-journald.service: Consumed 1.974s CPU time.
Jan 22 01:02:10.067709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 22 01:02:10.088692 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 22 01:02:10.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.104670 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 22 01:02:10.113246 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 22 01:02:10.122299 systemd[1]: Mounted media.mount - External Media Directory.
Jan 22 01:02:10.131421 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 22 01:02:10.139419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 22 01:02:10.147982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 22 01:02:10.154289 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 22 01:02:10.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.161397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 01:02:10.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.172908 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 01:02:10.173354 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 22 01:02:10.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.182541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 22 01:02:10.184170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 22 01:02:10.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.195891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 01:02:10.196276 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 22 01:02:10.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.208109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 01:02:10.208488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 22 01:02:10.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.221069 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 01:02:10.221510 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 22 01:02:10.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.232511 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 22 01:02:10.236238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 22 01:02:10.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.248767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 22 01:02:10.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.257706 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 01:02:10.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.271725 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 22 01:02:10.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.289157 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 22 01:02:10.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.307209 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 01:02:10.340957 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 22 01:02:10.359109 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 22 01:02:10.378414 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 22 01:02:10.390694 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 22 01:02:10.401800 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 22 01:02:10.401936 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 22 01:02:10.405698 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 22 01:02:10.421782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 01:02:10.422086 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 22 01:02:10.428205 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 22 01:02:10.442937 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 22 01:02:10.458234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 01:02:10.463488 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 22 01:02:10.480278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 01:02:10.500713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 22 01:02:10.516820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 22 01:02:10.518725 systemd-journald[1255]: Time spent on flushing to /var/log/journal/816e14960cac4294a43ee4e18cf7f557 is 89.172ms for 1123 entries.
Jan 22 01:02:10.518725 systemd-journald[1255]: System Journal (/var/log/journal/816e14960cac4294a43ee4e18cf7f557) is 8M, max 163.5M, 155.5M free.
Jan 22 01:02:10.656374 systemd-journald[1255]: Received client request to flush runtime journal.
Jan 22 01:02:10.656432 kernel: loop1: detected capacity change from 0 to 111544
Jan 22 01:02:10.551745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 22 01:02:10.579408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 22 01:02:10.629185 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 22 01:02:10.657067 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 22 01:02:10.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.675802 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 22 01:02:10.697267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 22 01:02:10.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.717057 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 22 01:02:10.738926 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 22 01:02:10.758971 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 22 01:02:10.759001 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 22 01:02:10.774333 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 01:02:10.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.799204 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 22 01:02:10.816265 kernel: loop2: detected capacity change from 0 to 229808
Jan 22 01:02:10.872005 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 22 01:02:10.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:10.939059 kernel: loop3: detected capacity change from 0 to 119256
Jan 22 01:02:10.984424 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 22 01:02:10.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:11.004000 audit: BPF prog-id=18 op=LOAD
Jan 22 01:02:11.005000 audit: BPF prog-id=19 op=LOAD
Jan 22 01:02:11.005000 audit: BPF prog-id=20 op=LOAD
Jan 22 01:02:11.008265 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 22 01:02:11.025000 audit: BPF prog-id=21 op=LOAD
Jan 22 01:02:11.031757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 22 01:02:11.049916 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 22 01:02:11.070000 audit: BPF prog-id=22 op=LOAD
Jan 22 01:02:11.070000 audit: BPF prog-id=23 op=LOAD
Jan 22 01:02:11.070000 audit: BPF prog-id=24 op=LOAD
Jan 22 01:02:11.073055 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 22 01:02:11.092000 audit: BPF prog-id=25 op=LOAD
Jan 22 01:02:11.092000 audit: BPF prog-id=26 op=LOAD
Jan 22 01:02:11.092000 audit: BPF prog-id=27 op=LOAD
Jan 22 01:02:11.097054 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 22 01:02:11.104728 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 22 01:02:11.129700 kernel: loop4: detected capacity change from 0 to 111544
Jan 22 01:02:11.138195 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 22 01:02:11.139004 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 22 01:02:11.160255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 01:02:11.180104 kernel: loop5: detected capacity change from 0 to 229808
Jan 22 01:02:11.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:11.237704 kernel: loop6: detected capacity change from 0 to 119256
Jan 22 01:02:11.246494 systemd-nsresourced[1315]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 22 01:02:11.249243 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 22 01:02:11.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:11.288972 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 22 01:02:11.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:11.299546 (sd-merge)[1318]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 22 01:02:11.308266 (sd-merge)[1318]: Merged extensions into '/usr'.
Jan 22 01:02:11.317683 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 22 01:02:11.317711 systemd[1]: Reloading...
Jan 22 01:02:11.457784 zram_generator::config[1364]: No configuration found.
Jan 22 01:02:11.499170 systemd-oomd[1311]: No swap; memory pressure usage will be degraded Jan 22 01:02:11.517287 systemd-resolved[1312]: Positive Trust Anchors: Jan 22 01:02:11.517307 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 01:02:11.517315 systemd-resolved[1312]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 01:02:11.517360 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 01:02:11.527752 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jan 22 01:02:11.918067 systemd[1]: Reloading finished in 599 ms. Jan 22 01:02:11.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:11.966487 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 22 01:02:11.980799 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 22 01:02:11.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:11.996389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 22 01:02:12.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:12.023092 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 22 01:02:12.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:12.057282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 01:02:12.088526 systemd[1]: Starting ensure-sysext.service... Jan 22 01:02:12.100240 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 01:02:12.109000 audit: BPF prog-id=8 op=UNLOAD Jan 22 01:02:12.109000 audit: BPF prog-id=7 op=UNLOAD Jan 22 01:02:12.111000 audit: BPF prog-id=28 op=LOAD Jan 22 01:02:12.122000 audit: BPF prog-id=29 op=LOAD Jan 22 01:02:12.124425 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 22 01:02:12.148000 audit: BPF prog-id=30 op=LOAD Jan 22 01:02:12.148000 audit: BPF prog-id=22 op=UNLOAD Jan 22 01:02:12.148000 audit: BPF prog-id=31 op=LOAD Jan 22 01:02:12.148000 audit: BPF prog-id=32 op=LOAD Jan 22 01:02:12.148000 audit: BPF prog-id=23 op=UNLOAD Jan 22 01:02:12.148000 audit: BPF prog-id=24 op=UNLOAD Jan 22 01:02:12.150000 audit: BPF prog-id=33 op=LOAD Jan 22 01:02:12.150000 audit: BPF prog-id=25 op=UNLOAD Jan 22 01:02:12.150000 audit: BPF prog-id=34 op=LOAD Jan 22 01:02:12.150000 audit: BPF prog-id=35 op=LOAD Jan 22 01:02:12.150000 audit: BPF prog-id=26 op=UNLOAD Jan 22 01:02:12.150000 audit: BPF prog-id=27 op=UNLOAD Jan 22 01:02:12.154000 audit: BPF prog-id=36 op=LOAD Jan 22 01:02:12.155000 audit: BPF prog-id=15 op=UNLOAD Jan 22 01:02:12.155000 audit: BPF prog-id=37 op=LOAD Jan 22 01:02:12.155000 audit: BPF prog-id=38 op=LOAD Jan 22 01:02:12.155000 audit: BPF prog-id=16 op=UNLOAD Jan 22 01:02:12.155000 audit: BPF prog-id=17 op=UNLOAD Jan 22 01:02:12.163000 audit: BPF prog-id=39 op=LOAD Jan 22 01:02:12.171000 audit: BPF prog-id=18 op=UNLOAD Jan 22 01:02:12.171000 audit: BPF prog-id=40 op=LOAD Jan 22 01:02:12.171000 audit: BPF prog-id=41 op=LOAD Jan 22 01:02:12.171000 audit: BPF prog-id=19 op=UNLOAD Jan 22 01:02:12.171000 audit: BPF prog-id=20 op=UNLOAD Jan 22 01:02:12.171000 audit: BPF prog-id=42 op=LOAD Jan 22 01:02:12.171000 audit: BPF prog-id=21 op=UNLOAD Jan 22 01:02:12.181003 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 22 01:02:12.181055 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 22 01:02:12.181527 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 22 01:02:12.184014 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Jan 22 01:02:12.184247 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. 
Jan 22 01:02:12.188752 systemd[1]: Reload requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)... Jan 22 01:02:12.188816 systemd[1]: Reloading... Jan 22 01:02:12.213188 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 01:02:12.213211 systemd-tmpfiles[1399]: Skipping /boot Jan 22 01:02:12.248428 systemd-udevd[1400]: Using default interface naming scheme 'v257'. Jan 22 01:02:12.249946 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 01:02:12.249962 systemd-tmpfiles[1399]: Skipping /boot Jan 22 01:02:12.343705 zram_generator::config[1432]: No configuration found. Jan 22 01:02:12.595707 kernel: mousedev: PS/2 mouse device common for all mice Jan 22 01:02:12.595797 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 22 01:02:12.625962 kernel: ACPI: button: Power Button [PWRF] Jan 22 01:02:12.674805 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 22 01:02:12.686839 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 22 01:02:12.838071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 22 01:02:12.848919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 22 01:02:12.849046 systemd[1]: Reloading finished in 659 ms. Jan 22 01:02:12.865239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 01:02:12.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:02:12.884000 audit: BPF prog-id=43 op=LOAD Jan 22 01:02:12.884000 audit: BPF prog-id=42 op=UNLOAD Jan 22 01:02:12.885000 audit: BPF prog-id=44 op=LOAD Jan 22 01:02:12.885000 audit: BPF prog-id=36 op=UNLOAD Jan 22 01:02:12.886000 audit: BPF prog-id=45 op=LOAD Jan 22 01:02:12.886000 audit: BPF prog-id=46 op=LOAD Jan 22 01:02:12.886000 audit: BPF prog-id=37 op=UNLOAD Jan 22 01:02:12.886000 audit: BPF prog-id=38 op=UNLOAD Jan 22 01:02:12.893000 audit: BPF prog-id=47 op=LOAD Jan 22 01:02:12.893000 audit: BPF prog-id=33 op=UNLOAD Jan 22 01:02:12.893000 audit: BPF prog-id=48 op=LOAD Jan 22 01:02:12.893000 audit: BPF prog-id=49 op=LOAD Jan 22 01:02:12.893000 audit: BPF prog-id=34 op=UNLOAD Jan 22 01:02:12.893000 audit: BPF prog-id=35 op=UNLOAD Jan 22 01:02:12.895000 audit: BPF prog-id=50 op=LOAD Jan 22 01:02:12.895000 audit: BPF prog-id=51 op=LOAD Jan 22 01:02:12.895000 audit: BPF prog-id=28 op=UNLOAD Jan 22 01:02:12.895000 audit: BPF prog-id=29 op=UNLOAD Jan 22 01:02:12.898000 audit: BPF prog-id=52 op=LOAD Jan 22 01:02:12.898000 audit: BPF prog-id=30 op=UNLOAD Jan 22 01:02:12.898000 audit: BPF prog-id=53 op=LOAD Jan 22 01:02:12.898000 audit: BPF prog-id=54 op=LOAD Jan 22 01:02:12.898000 audit: BPF prog-id=31 op=UNLOAD Jan 22 01:02:12.898000 audit: BPF prog-id=32 op=UNLOAD Jan 22 01:02:12.899000 audit: BPF prog-id=55 op=LOAD Jan 22 01:02:12.899000 audit: BPF prog-id=39 op=UNLOAD Jan 22 01:02:12.901000 audit: BPF prog-id=56 op=LOAD Jan 22 01:02:12.901000 audit: BPF prog-id=57 op=LOAD Jan 22 01:02:12.901000 audit: BPF prog-id=40 op=UNLOAD Jan 22 01:02:12.901000 audit: BPF prog-id=41 op=UNLOAD Jan 22 01:02:12.913447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 01:02:12.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:02:13.016134 systemd[1]: Finished ensure-sysext.service. Jan 22 01:02:13.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:13.032172 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 01:02:13.035396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 01:02:13.104520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 22 01:02:13.120765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 01:02:13.231081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 01:02:13.251949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 01:02:13.264032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 01:02:13.285213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 01:02:13.292000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 01:02:13.301253 kernel: kauditd_printk_skb: 117 callbacks suppressed Jan 22 01:02:13.301301 kernel: audit: type=1305 audit(1769043733.292:212): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 01:02:13.295113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 01:02:13.301429 augenrules[1538]: No rules Jan 22 01:02:13.295498 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 22 01:02:13.292000 audit[1538]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd66fa2af0 a2=420 a3=0 items=0 ppid=1513 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:13.340766 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 22 01:02:13.372690 kernel: audit: type=1300 audit(1769043733.292:212): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd66fa2af0 a2=420 a3=0 items=0 ppid=1513 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:13.372787 kernel: audit: type=1327 audit(1769043733.292:212): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 01:02:13.292000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 01:02:13.380507 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 22 01:02:13.398712 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 01:02:13.409440 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 22 01:02:13.435696 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 01:02:13.462815 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 22 01:02:13.499776 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 22 01:02:13.529168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 01:02:13.560963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 01:02:13.577342 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 01:02:13.578081 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 01:02:13.581372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 01:02:13.581846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 01:02:13.599356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 01:02:13.599850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 01:02:13.600524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 01:02:13.601356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 01:02:13.616231 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 01:02:13.616841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 01:02:13.627180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 22 01:02:13.677856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 01:02:13.678486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 01:02:13.682016 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 22 01:02:13.695116 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 22 01:02:13.762438 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 22 01:02:13.768474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 22 01:02:13.964246 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 22 01:02:13.964856 systemd[1]: Reached target time-set.target - System Time Set. Jan 22 01:02:13.984956 systemd-networkd[1547]: lo: Link UP Jan 22 01:02:13.984972 systemd-networkd[1547]: lo: Gained carrier Jan 22 01:02:13.996843 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 01:02:13.999272 systemd-networkd[1547]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 01:02:13.999329 systemd-networkd[1547]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 22 01:02:14.007144 systemd-networkd[1547]: eth0: Link UP Jan 22 01:02:14.010776 systemd-networkd[1547]: eth0: Gained carrier Jan 22 01:02:14.010982 systemd-networkd[1547]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 01:02:14.168250 systemd-networkd[1547]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 22 01:02:14.170728 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Jan 22 01:02:15.210453 systemd-timesyncd[1548]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 22 01:02:15.210532 systemd-timesyncd[1548]: Initial clock synchronization to Thu 2026-01-22 01:02:15.210144 UTC. Jan 22 01:02:15.214019 systemd-resolved[1312]: Clock change detected. Flushing caches. 
Jan 22 01:02:15.492913 kernel: kvm_amd: TSC scaling supported Jan 22 01:02:15.493014 kernel: kvm_amd: Nested Virtualization enabled Jan 22 01:02:15.493036 kernel: kvm_amd: Nested Paging enabled Jan 22 01:02:15.493055 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 22 01:02:15.493074 kernel: kvm_amd: PMU virtualization is disabled Jan 22 01:02:15.539766 systemd[1]: Reached target network.target - Network. Jan 22 01:02:15.543239 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 22 01:02:15.562870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 22 01:02:15.578355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 01:02:15.663355 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 22 01:02:16.332757 ldconfig[1544]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 22 01:02:16.357536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 22 01:02:16.380517 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 22 01:02:16.433790 systemd-networkd[1547]: eth0: Gained IPv6LL Jan 22 01:02:16.435648 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 22 01:02:16.456542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 22 01:02:16.477044 systemd[1]: Reached target network-online.target - Network is Online. Jan 22 01:02:16.488142 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 01:02:16.502329 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 22 01:02:16.517796 kernel: EDAC MC: Ver: 3.0.0 Jan 22 01:02:16.518935 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 22 01:02:16.538783 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 22 01:02:16.549982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 22 01:02:16.565129 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 22 01:02:16.572336 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 22 01:02:16.581263 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 22 01:02:16.591597 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 22 01:02:16.599799 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 22 01:02:16.599909 systemd[1]: Reached target paths.target - Path Units. Jan 22 01:02:16.605334 systemd[1]: Reached target timers.target - Timer Units. Jan 22 01:02:16.614640 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 22 01:02:16.626978 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 22 01:02:16.638627 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 22 01:02:16.647130 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 22 01:02:16.670833 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 22 01:02:16.686644 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 22 01:02:16.698215 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 22 01:02:16.708153 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 22 01:02:16.718065 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 01:02:16.731198 systemd[1]: Reached target basic.target - Basic System. Jan 22 01:02:16.738012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 22 01:02:16.739890 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 22 01:02:16.748091 systemd[1]: Starting containerd.service - containerd container runtime... Jan 22 01:02:16.766322 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 22 01:02:16.793768 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 22 01:02:16.807895 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 22 01:02:16.823045 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 22 01:02:16.841204 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 22 01:02:16.851058 jq[1587]: false Jan 22 01:02:16.851809 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 22 01:02:16.860066 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 22 01:02:16.890063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 01:02:16.913288 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing passwd entry cache Jan 22 01:02:16.913271 oslogin_cache_refresh[1589]: Refreshing passwd entry cache Jan 22 01:02:16.914604 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 22 01:02:16.933570 extend-filesystems[1588]: Found /dev/vda6 Jan 22 01:02:16.928293 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 22 01:02:16.955592 extend-filesystems[1588]: Found /dev/vda9 Jan 22 01:02:16.961674 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting users, quitting Jan 22 01:02:16.961674 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 01:02:16.961674 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing group entry cache Jan 22 01:02:16.937566 oslogin_cache_refresh[1589]: Failure getting users, quitting Jan 22 01:02:16.937523 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 22 01:02:16.937594 oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 01:02:16.937660 oslogin_cache_refresh[1589]: Refreshing group entry cache Jan 22 01:02:16.964348 extend-filesystems[1588]: Checking size of /dev/vda9 Jan 22 01:02:16.974136 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting groups, quitting Jan 22 01:02:16.974136 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 01:02:16.964858 oslogin_cache_refresh[1589]: Failure getting groups, quitting Jan 22 01:02:16.964879 oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 01:02:16.975604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 22 01:02:17.000109 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 22 01:02:17.031783 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 22 01:02:17.052264 extend-filesystems[1588]: Resized partition /dev/vda9 Jan 22 01:02:17.083833 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025) Jan 22 01:02:17.091352 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 22 01:02:17.058906 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 22 01:02:17.059864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 22 01:02:17.070315 systemd[1]: Starting update-engine.service - Update Engine... Jan 22 01:02:17.110883 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 22 01:02:17.132242 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 22 01:02:17.144108 update_engine[1618]: I20260122 01:02:17.144011 1618 main.cc:92] Flatcar Update Engine starting Jan 22 01:02:17.144137 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 22 01:02:17.146969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 22 01:02:17.147610 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 22 01:02:17.148066 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 22 01:02:17.167773 systemd[1]: motdgen.service: Deactivated successfully. Jan 22 01:02:17.168320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 22 01:02:17.175598 jq[1624]: true Jan 22 01:02:17.182008 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 22 01:02:17.204993 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 22 01:02:17.208056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 22 01:02:17.208681 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 22 01:02:17.255520 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 22 01:02:17.255520 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 22 01:02:17.255520 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 22 01:02:17.288960 extend-filesystems[1588]: Resized filesystem in /dev/vda9 Jan 22 01:02:17.288193 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 22 01:02:17.301512 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 22 01:02:17.315609 jq[1635]: true Jan 22 01:02:17.317323 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 22 01:02:17.318017 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 22 01:02:17.379478 tar[1633]: linux-amd64/LICENSE Jan 22 01:02:17.381614 tar[1633]: linux-amd64/helm Jan 22 01:02:17.409072 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 22 01:02:17.455838 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button) Jan 22 01:02:17.455938 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 22 01:02:17.464345 systemd-logind[1609]: New seat seat0. Jan 22 01:02:17.468876 systemd[1]: Started systemd-logind.service - User Login Management. Jan 22 01:02:17.527306 dbus-daemon[1585]: [system] SELinux support is enabled Jan 22 01:02:17.531229 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 22 01:02:17.546681 bash[1671]: Updated "/home/core/.ssh/authorized_keys" Jan 22 01:02:17.550484 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 22 01:02:17.566009 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 22 01:02:17.566227 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 22 01:02:17.566550 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 22 01:02:17.575781 update_engine[1618]: I20260122 01:02:17.569875 1618 update_check_scheduler.cc:74] Next update check in 2m54s Jan 22 01:02:17.575189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 22 01:02:17.575215 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 22 01:02:17.576325 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 22 01:02:17.585235 systemd[1]: Started update-engine.service - Update Engine. Jan 22 01:02:17.612951 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 22 01:02:17.851876 locksmithd[1679]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 22 01:02:17.876331 containerd[1637]: time="2026-01-22T01:02:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 22 01:02:17.879865 containerd[1637]: time="2026-01-22T01:02:17.879834389Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 22 01:02:17.898548 containerd[1637]: time="2026-01-22T01:02:17.898024581Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.821µs"
Jan 22 01:02:17.903305 containerd[1637]: time="2026-01-22T01:02:17.903221742Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 22 01:02:17.903490 containerd[1637]: time="2026-01-22T01:02:17.903334933Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 22 01:02:17.903490 containerd[1637]: time="2026-01-22T01:02:17.903358988Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 22 01:02:17.903943 containerd[1637]: time="2026-01-22T01:02:17.903840016Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 22 01:02:17.903986 containerd[1637]: time="2026-01-22T01:02:17.903939111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904042564Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904070386Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904530515Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904559028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904581150Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 22 01:02:17.904639 containerd[1637]: time="2026-01-22T01:02:17.904597430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905327 containerd[1637]: time="2026-01-22T01:02:17.904931043Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905327 containerd[1637]: time="2026-01-22T01:02:17.904959846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905327 containerd[1637]: time="2026-01-22T01:02:17.905095349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905592 containerd[1637]: time="2026-01-22T01:02:17.905553785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905630 containerd[1637]: time="2026-01-22T01:02:17.905608788Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 22 01:02:17.905661 containerd[1637]: time="2026-01-22T01:02:17.905629276Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 22 01:02:17.905689 containerd[1637]: time="2026-01-22T01:02:17.905666675Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 22 01:02:17.907107 containerd[1637]: time="2026-01-22T01:02:17.906112638Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 22 01:02:17.907107 containerd[1637]: time="2026-01-22T01:02:17.906210310Z" level=info msg="metadata content store policy set" policy=shared
Jan 22 01:02:17.910853 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 22 01:02:17.928316 containerd[1637]: time="2026-01-22T01:02:17.928257704Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 22 01:02:17.928316 containerd[1637]: time="2026-01-22T01:02:17.928326172Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928597679Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928617115Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928647642Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928763278Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928798544Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928813763Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928829060Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928844510Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928865148Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 22 01:02:17.928881 containerd[1637]: time="2026-01-22T01:02:17.928879615Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 22 01:02:17.929096 containerd[1637]: time="2026-01-22T01:02:17.928895504Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 22 01:02:17.929096 containerd[1637]: time="2026-01-22T01:02:17.928912046Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 22 01:02:17.929096 containerd[1637]: time="2026-01-22T01:02:17.929059160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 22 01:02:17.929096 containerd[1637]: time="2026-01-22T01:02:17.929081833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 22 01:02:17.929096 containerd[1637]: time="2026-01-22T01:02:17.929098684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929112810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929126146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929141123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929156702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929170538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929183693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929197819Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929210252Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 22 01:02:17.929259 containerd[1637]: time="2026-01-22T01:02:17.929237022Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 22 01:02:17.929670 containerd[1637]: time="2026-01-22T01:02:17.929282266Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 22 01:02:17.929670 containerd[1637]: time="2026-01-22T01:02:17.929298467Z" level=info msg="Start snapshots syncer"
Jan 22 01:02:17.929670 containerd[1637]: time="2026-01-22T01:02:17.929338752Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 22 01:02:17.930019 containerd[1637]: time="2026-01-22T01:02:17.929797889Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 22 01:02:17.930019 containerd[1637]: time="2026-01-22T01:02:17.929874692Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.929932791Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930048828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930068644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930086708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930103790Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930118738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930133205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930146380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930159033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930171947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930208465Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930224285Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 22 01:02:17.930313 containerd[1637]: time="2026-01-22T01:02:17.930235556Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930247538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930258709Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930273146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930286220Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930301960Z" level=info msg="runtime interface created"
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930311648Z" level=info msg="created NRI interface"
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930323541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930338318Z" level=info msg="Connect containerd service"
Jan 22 01:02:17.931079 containerd[1637]: time="2026-01-22T01:02:17.930360610Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 22 01:02:17.934002 containerd[1637]: time="2026-01-22T01:02:17.931617786Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 22 01:02:17.975228 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 22 01:02:17.994919 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 22 01:02:18.071134 systemd[1]: issuegen.service: Deactivated successfully.
Jan 22 01:02:18.072063 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 22 01:02:18.106867 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 22 01:02:18.177608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 22 01:02:18.192995 tar[1633]: linux-amd64/README.md
Jan 22 01:02:18.209072 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 22 01:02:18.225027 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 22 01:02:18.249224 systemd[1]: Reached target getty.target - Login Prompts.
Jan 22 01:02:18.263308 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 22 01:02:18.330020 containerd[1637]: time="2026-01-22T01:02:18.329966005Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 22 01:02:18.330259 containerd[1637]: time="2026-01-22T01:02:18.330233564Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 22 01:02:18.330492 containerd[1637]: time="2026-01-22T01:02:18.330354059Z" level=info msg="Start subscribing containerd event"
Jan 22 01:02:18.330609 containerd[1637]: time="2026-01-22T01:02:18.330573719Z" level=info msg="Start recovering state"
Jan 22 01:02:18.330879 containerd[1637]: time="2026-01-22T01:02:18.330853261Z" level=info msg="Start event monitor"
Jan 22 01:02:18.330965 containerd[1637]: time="2026-01-22T01:02:18.330944251Z" level=info msg="Start cni network conf syncer for default"
Jan 22 01:02:18.331262 containerd[1637]: time="2026-01-22T01:02:18.331017939Z" level=info msg="Start streaming server"
Jan 22 01:02:18.331262 containerd[1637]: time="2026-01-22T01:02:18.331105522Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 22 01:02:18.331262 containerd[1637]: time="2026-01-22T01:02:18.331120901Z" level=info msg="runtime interface starting up..."
Jan 22 01:02:18.331262 containerd[1637]: time="2026-01-22T01:02:18.331129607Z" level=info msg="starting plugins..."
Jan 22 01:02:18.331262 containerd[1637]: time="2026-01-22T01:02:18.331154303Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 22 01:02:18.331563 containerd[1637]: time="2026-01-22T01:02:18.331534162Z" level=info msg="containerd successfully booted in 0.455742s"
Jan 22 01:02:18.332278 systemd[1]: Started containerd.service - containerd container runtime.
Jan 22 01:02:19.681650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:02:19.704807 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 22 01:02:19.726830 systemd[1]: Startup finished in 15.617s (kernel) + 18.779s (initrd) + 12.682s (userspace) = 47.079s.
Jan 22 01:02:19.732877 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 01:02:21.411566 kubelet[1723]: E0122 01:02:21.411016 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 01:02:21.422950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 01:02:21.423293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 01:02:21.427248 systemd[1]: kubelet.service: Consumed 1.566s CPU time, 269.3M memory peak.
Jan 22 01:02:25.836297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 22 01:02:25.840102 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:35844.service - OpenSSH per-connection server daemon (10.0.0.1:35844).
Jan 22 01:02:26.094567 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 35844 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:26.098855 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:26.117268 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 22 01:02:26.120293 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 22 01:02:26.142588 systemd-logind[1609]: New session 1 of user core.
Jan 22 01:02:26.181719 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 22 01:02:26.191593 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 22 01:02:26.219336 (systemd)[1743]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 22 01:02:26.232156 systemd-logind[1609]: New session c1 of user core.
Jan 22 01:02:26.556095 systemd[1743]: Queued start job for default target default.target.
Jan 22 01:02:26.583865 systemd[1743]: Created slice app.slice - User Application Slice.
Jan 22 01:02:26.583981 systemd[1743]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 22 01:02:26.584006 systemd[1743]: Reached target paths.target - Paths.
Jan 22 01:02:26.585657 systemd[1743]: Reached target timers.target - Timers.
Jan 22 01:02:26.590245 systemd[1743]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 22 01:02:26.592183 systemd[1743]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 22 01:02:26.641972 systemd[1743]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 22 01:02:26.642146 systemd[1743]: Reached target sockets.target - Sockets.
Jan 22 01:02:26.645902 systemd[1743]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 22 01:02:26.646161 systemd[1743]: Reached target basic.target - Basic System.
Jan 22 01:02:26.646303 systemd[1743]: Reached target default.target - Main User Target.
Jan 22 01:02:26.646360 systemd[1743]: Startup finished in 391ms.
Jan 22 01:02:26.646691 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 22 01:02:26.667873 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 22 01:02:26.701874 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856).
Jan 22 01:02:26.805158 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:26.809000 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:26.823592 systemd-logind[1609]: New session 2 of user core.
Jan 22 01:02:26.838870 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 22 01:02:26.873151 sshd[1759]: Connection closed by 10.0.0.1 port 35856
Jan 22 01:02:26.873669 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Jan 22 01:02:26.895630 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:35856.service: Deactivated successfully.
Jan 22 01:02:26.898635 systemd[1]: session-2.scope: Deactivated successfully.
Jan 22 01:02:26.900468 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit.
Jan 22 01:02:26.906563 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:35866.service - OpenSSH per-connection server daemon (10.0.0.1:35866).
Jan 22 01:02:26.909130 systemd-logind[1609]: Removed session 2.
Jan 22 01:02:27.001205 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 35866 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:27.004762 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:27.019573 systemd-logind[1609]: New session 3 of user core.
Jan 22 01:02:27.035598 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 22 01:02:27.072920 sshd[1768]: Connection closed by 10.0.0.1 port 35866
Jan 22 01:02:27.074312 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jan 22 01:02:27.092334 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:35866.service: Deactivated successfully.
Jan 22 01:02:27.095302 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 01:02:27.104007 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit.
Jan 22 01:02:27.109063 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:35880.service - OpenSSH per-connection server daemon (10.0.0.1:35880).
Jan 22 01:02:27.112224 systemd-logind[1609]: Removed session 3.
Jan 22 01:02:27.217015 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 35880 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:27.220899 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:27.242674 systemd-logind[1609]: New session 4 of user core.
Jan 22 01:02:27.253891 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 22 01:02:27.307962 sshd[1777]: Connection closed by 10.0.0.1 port 35880
Jan 22 01:02:27.306031 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Jan 22 01:02:27.330297 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:35880.service: Deactivated successfully.
Jan 22 01:02:27.335258 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 01:02:27.338270 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit.
Jan 22 01:02:27.348248 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:35896.service - OpenSSH per-connection server daemon (10.0.0.1:35896).
Jan 22 01:02:27.349704 systemd-logind[1609]: Removed session 4.
Jan 22 01:02:27.477163 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 35896 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:27.482632 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:27.514529 systemd-logind[1609]: New session 5 of user core.
Jan 22 01:02:27.536655 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 22 01:02:27.618569 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 22 01:02:27.619168 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 22 01:02:27.671285 sudo[1787]: pam_unix(sudo:session): session closed for user root
Jan 22 01:02:27.686150 sshd[1786]: Connection closed by 10.0.0.1 port 35896
Jan 22 01:02:27.685281 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Jan 22 01:02:27.710732 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:35896.service: Deactivated successfully.
Jan 22 01:02:27.713983 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 01:02:27.718519 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit.
Jan 22 01:02:27.725627 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:35900.service - OpenSSH per-connection server daemon (10.0.0.1:35900).
Jan 22 01:02:27.727652 systemd-logind[1609]: Removed session 5.
Jan 22 01:02:27.867329 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 35900 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:27.871014 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:27.899875 systemd-logind[1609]: New session 6 of user core.
Jan 22 01:02:27.911140 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 22 01:02:27.955936 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 22 01:02:27.956619 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 22 01:02:28.005133 sudo[1798]: pam_unix(sudo:session): session closed for user root
Jan 22 01:02:28.026353 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 22 01:02:28.028742 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 22 01:02:28.067223 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 22 01:02:28.192000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 22 01:02:28.196079 augenrules[1820]: No rules
Jan 22 01:02:28.197636 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 22 01:02:28.198269 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 22 01:02:28.206194 sudo[1797]: pam_unix(sudo:session): session closed for user root
Jan 22 01:02:28.192000 audit[1820]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc79701c20 a2=420 a3=0 items=0 ppid=1801 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 01:02:28.213236 sshd[1796]: Connection closed by 10.0.0.1 port 35900
Jan 22 01:02:28.214689 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Jan 22 01:02:28.240077 kernel: audit: type=1305 audit(1769043748.192:213): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 22 01:02:28.240189 kernel: audit: type=1300 audit(1769043748.192:213): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc79701c20 a2=420 a3=0 items=0 ppid=1801 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 01:02:28.240225 kernel: audit: type=1327 audit(1769043748.192:213): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 22 01:02:28.192000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 22 01:02:28.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.254015 kernel: audit: type=1130 audit(1769043748.198:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.287643 kernel: audit: type=1131 audit(1769043748.198:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.287743 kernel: audit: type=1106 audit(1769043748.205:216): pid=1797 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.205000 audit[1797]: USER_END pid=1797 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.308889 kernel: audit: type=1104 audit(1769043748.205:217): pid=1797 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.205000 audit[1797]: CRED_DISP pid=1797 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.327592 kernel: audit: type=1106 audit(1769043748.218:218): pid=1793 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.218000 audit[1793]: USER_END pid=1793 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.219000 audit[1793]: CRED_DISP pid=1793 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.372949 kernel: audit: type=1104 audit(1769043748.219:219): pid=1793 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.386864 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:35900.service: Deactivated successfully.
Jan 22 01:02:28.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.144:22-10.0.0.1:35900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.390869 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 01:02:28.399964 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit.
Jan 22 01:02:28.404702 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:35916.service - OpenSSH per-connection server daemon (10.0.0.1:35916).
Jan 22 01:02:28.409892 systemd-logind[1609]: Removed session 6.
Jan 22 01:02:28.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.420131 kernel: audit: type=1131 audit(1769043748.386:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.144:22-10.0.0.1:35900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.497000 audit[1829]: USER_ACCT pid=1829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.501348 sshd[1829]: Accepted publickey for core from 10.0.0.1 port 35916 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 01:02:28.501000 audit[1829]: CRED_ACQ pid=1829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.501000 audit[1829]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe677ccaf0 a2=3 a3=0 items=0 ppid=1 pid=1829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 01:02:28.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 01:02:28.502706 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 01:02:28.515351 systemd-logind[1609]: New session 7 of user core.
Jan 22 01:02:28.535929 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 22 01:02:28.543000 audit[1829]: USER_START pid=1829 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.548000 audit[1832]: CRED_ACQ pid=1832 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 01:02:28.569000 audit[1833]: USER_ACCT pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.570693 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 22 01:02:28.571000 audit[1833]: CRED_REFR pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:28.572209 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 22 01:02:28.578000 audit[1833]: USER_START pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:29.474684 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 22 01:02:29.497360 (dockerd)[1853]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 22 01:02:30.136766 dockerd[1853]: time="2026-01-22T01:02:30.136571849Z" level=info msg="Starting up"
Jan 22 01:02:30.147430 dockerd[1853]: time="2026-01-22T01:02:30.147223450Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 22 01:02:30.202020 dockerd[1853]: time="2026-01-22T01:02:30.201859313Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 22 01:02:30.363511 dockerd[1853]: time="2026-01-22T01:02:30.363320510Z" level=info msg="Loading containers: start."
Jan 22 01:02:30.403676 kernel: Initializing XFRM netlink socket
Jan 22 01:02:30.660000 audit[1905]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 01:02:30.660000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc75deee70 a2=0 a3=0 items=0 ppid=1853 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 01:02:30.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 22 01:02:30.669000 audit[1907]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 01:02:30.669000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe654f2060 a2=0 a3=0 items=0 ppid=1853 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 
01:02:30.669000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 01:02:30.677000 audit[1909]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.677000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbd2b1130 a2=0 a3=0 items=0 ppid=1853 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.677000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 01:02:30.693000 audit[1911]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.693000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff18fc32f0 a2=0 a3=0 items=0 ppid=1853 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 01:02:30.702000 audit[1913]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.702000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdab3a6c00 a2=0 a3=0 items=0 ppid=1853 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.702000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 01:02:30.710000 audit[1915]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.710000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe805599c0 a2=0 a3=0 items=0 ppid=1853 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 01:02:30.719000 audit[1917]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.719000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca94a2b90 a2=0 a3=0 items=0 ppid=1853 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 01:02:30.728000 audit[1919]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.728000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe63463a80 a2=0 a3=0 items=0 ppid=1853 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.728000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 01:02:30.867000 audit[1922]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.867000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc7fbd2280 a2=0 a3=0 items=0 ppid=1853 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.867000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 22 01:02:30.877000 audit[1924]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.877000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc1c752b30 a2=0 a3=0 items=0 ppid=1853 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 01:02:30.903000 audit[1926]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.903000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe723e1e10 a2=0 a3=0 items=0 ppid=1853 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.903000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 01:02:30.914000 audit[1928]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.914000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd508111d0 a2=0 a3=0 items=0 ppid=1853 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 01:02:30.923000 audit[1930]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:30.923000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff1a8bdac0 a2=0 a3=0 items=0 ppid=1853 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:30.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 01:02:31.161000 audit[1960]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.161000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe43183740 a2=0 a3=0 items=0 ppid=1853 pid=1960 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 01:02:31.173000 audit[1962]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.173000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe3f2c33f0 a2=0 a3=0 items=0 ppid=1853 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 01:02:31.198000 audit[1964]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.198000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe25df1760 a2=0 a3=0 items=0 ppid=1853 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 01:02:31.211000 audit[1966]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.211000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10583210 a2=0 a3=0 items=0 ppid=1853 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 01:02:31.218000 audit[1968]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.218000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd36af8d90 a2=0 a3=0 items=0 ppid=1853 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 01:02:31.232000 audit[1970]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.232000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde11e81b0 a2=0 a3=0 items=0 ppid=1853 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 01:02:31.243000 audit[1972]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.243000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd6d356350 a2=0 a3=0 items=0 ppid=1853 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 01:02:31.257000 audit[1974]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.257000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff878d6260 a2=0 a3=0 items=0 ppid=1853 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 01:02:31.272000 audit[1976]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.272000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd39452b30 a2=0 a3=0 items=0 ppid=1853 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 22 01:02:31.280000 audit[1978]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 
01:02:31.280000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdb3e8b5d0 a2=0 a3=0 items=0 ppid=1853 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 01:02:31.292000 audit[1980]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.292000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe12b236d0 a2=0 a3=0 items=0 ppid=1853 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 01:02:31.303000 audit[1982]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.303000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff4efdebb0 a2=0 a3=0 items=0 ppid=1853 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 01:02:31.317000 audit[1984]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1984 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.317000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb8b464b0 a2=0 a3=0 items=0 ppid=1853 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 01:02:31.341000 audit[1989]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.341000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea8f543e0 a2=0 a3=0 items=0 ppid=1853 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 01:02:31.354000 audit[1991]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.354000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcca2833e0 a2=0 a3=0 items=0 ppid=1853 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 01:02:31.364000 audit[1993]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1993 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.364000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff08134040 a2=0 a3=0 items=0 ppid=1853 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 01:02:31.376000 audit[1995]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.376000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6167c390 a2=0 a3=0 items=0 ppid=1853 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 01:02:31.384000 audit[1997]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.384000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff8b91d670 a2=0 a3=0 items=0 ppid=1853 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 01:02:31.396000 audit[1999]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1999 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:02:31.396000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffda076a5f0 a2=0 a3=0 items=0 ppid=1853 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 01:02:31.454106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 22 01:02:31.462971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 01:02:31.484000 audit[2005]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.484000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffeb12cf390 a2=0 a3=0 items=0 ppid=1853 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 22 01:02:31.494000 audit[2007]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.494000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd6a355a50 a2=0 a3=0 items=0 ppid=1853 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:02:31.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 22 01:02:31.551000 audit[2017]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.551000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd8fde18f0 a2=0 a3=0 items=0 ppid=1853 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 22 01:02:31.602000 audit[2023]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.602000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe771d8b00 a2=0 a3=0 items=0 ppid=1853 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 22 01:02:31.611000 audit[2025]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.611000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd2f26e650 a2=0 a3=0 items=0 ppid=1853 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 22 01:02:31.621000 audit[2027]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.621000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd3f015de0 a2=0 a3=0 items=0 ppid=1853 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 22 01:02:31.633000 audit[2029]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.633000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc003a8be0 a2=0 a3=0 items=0 ppid=1853 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 01:02:31.644000 audit[2031]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2031 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:02:31.644000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd3738bd0 a2=0 a3=0 items=0 ppid=1853 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:02:31.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 22 01:02:31.649242 systemd-networkd[1547]: docker0: Link UP Jan 22 01:02:31.784735 dockerd[1853]: time="2026-01-22T01:02:31.784573864Z" level=info msg="Loading containers: done." Jan 22 01:02:31.825621 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1707956248-merged.mount: Deactivated successfully. Jan 22 01:02:31.835532 dockerd[1853]: time="2026-01-22T01:02:31.835141463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 22 01:02:31.835532 dockerd[1853]: time="2026-01-22T01:02:31.835216283Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 22 01:02:31.835532 dockerd[1853]: time="2026-01-22T01:02:31.835301613Z" level=info msg="Initializing buildkit" Jan 22 01:02:31.836962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 01:02:31.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:02:31.864565 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 01:02:31.980140 dockerd[1853]: time="2026-01-22T01:02:31.979345109Z" level=info msg="Completed buildkit initialization" Jan 22 01:02:31.982537 kubelet[2043]: E0122 01:02:31.982345 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 01:02:31.988931 dockerd[1853]: time="2026-01-22T01:02:31.988620542Z" level=info msg="Daemon has completed initialization" Jan 22 01:02:31.989104 dockerd[1853]: time="2026-01-22T01:02:31.989070682Z" level=info msg="API listen on /run/docker.sock" Jan 22 01:02:31.989678 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 22 01:02:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:02:31.996121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 01:02:31.996520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 01:02:31.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 01:02:31.997257 systemd[1]: kubelet.service: Consumed 354ms CPU time, 111.4M memory peak. 
Jan 22 01:02:33.736495 containerd[1637]: time="2026-01-22T01:02:33.735740545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 22 01:02:35.120999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702114702.mount: Deactivated successfully.
Jan 22 01:02:40.083765 containerd[1637]: time="2026-01-22T01:02:40.082151283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:40.093962 containerd[1637]: time="2026-01-22T01:02:40.093839249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28595692"
Jan 22 01:02:40.111721 containerd[1637]: time="2026-01-22T01:02:40.107773027Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:40.119850 containerd[1637]: time="2026-01-22T01:02:40.119535612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:40.126032 containerd[1637]: time="2026-01-22T01:02:40.125801418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 6.390004197s"
Jan 22 01:02:40.126032 containerd[1637]: time="2026-01-22T01:02:40.125845129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 22 01:02:40.133227 containerd[1637]: time="2026-01-22T01:02:40.133180431Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 22 01:02:42.159860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 22 01:02:42.172056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:02:42.742778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:02:42.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:42.758246 kernel: kauditd_printk_skb: 134 callbacks suppressed
Jan 22 01:02:42.761650 kernel: audit: type=1130 audit(1769043762.744:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:42.782110 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 01:02:42.957805 kubelet[2158]: E0122 01:02:42.957318 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 01:02:42.964685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 01:02:42.966604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 01:02:42.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:02:42.970546 systemd[1]: kubelet.service: Consumed 363ms CPU time, 110.8M memory peak.
Jan 22 01:02:42.992550 kernel: audit: type=1131 audit(1769043762.969:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:02:46.949054 containerd[1637]: time="2026-01-22T01:02:46.946883988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:46.953131 containerd[1637]: time="2026-01-22T01:02:46.952897927Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626"
Jan 22 01:02:46.959058 containerd[1637]: time="2026-01-22T01:02:46.958716035Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:46.968050 containerd[1637]: time="2026-01-22T01:02:46.966739090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:46.973173 containerd[1637]: time="2026-01-22T01:02:46.971819531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 6.838473641s"
Jan 22 01:02:46.973173 containerd[1637]: time="2026-01-22T01:02:46.971864294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 22 01:02:46.975110 containerd[1637]: time="2026-01-22T01:02:46.973706706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 22 01:02:51.601715 containerd[1637]: time="2026-01-22T01:02:51.601510985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:51.605589 containerd[1637]: time="2026-01-22T01:02:51.604747404Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965"
Jan 22 01:02:51.609525 containerd[1637]: time="2026-01-22T01:02:51.607711557Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:51.618265 containerd[1637]: time="2026-01-22T01:02:51.613488434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:51.623313 containerd[1637]: time="2026-01-22T01:02:51.621091946Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 4.64726784s"
Jan 22 01:02:51.623313 containerd[1637]: time="2026-01-22T01:02:51.621212816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 22 01:02:51.623313 containerd[1637]: time="2026-01-22T01:02:51.622502032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 22 01:02:53.158565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 22 01:02:53.163758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:02:53.589138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:02:53.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:53.628476 kernel: audit: type=1130 audit(1769043773.587:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:02:53.634559 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 01:02:53.779137 kubelet[2184]: E0122 01:02:53.777205 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 01:02:53.785743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 01:02:53.786157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 01:02:53.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:02:53.787064 systemd[1]: kubelet.service: Consumed 356ms CPU time, 110.5M memory peak.
Jan 22 01:02:53.801537 kernel: audit: type=1131 audit(1769043773.786:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:02:53.828538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370139336.mount: Deactivated successfully.
Jan 22 01:02:56.563669 containerd[1637]: time="2026-01-22T01:02:56.562295199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:56.567072 containerd[1637]: time="2026-01-22T01:02:56.566736389Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31927016"
Jan 22 01:02:56.571080 containerd[1637]: time="2026-01-22T01:02:56.570669701Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:56.574946 containerd[1637]: time="2026-01-22T01:02:56.574848288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:02:56.576087 containerd[1637]: time="2026-01-22T01:02:56.576028204Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.953491707s"
Jan 22 01:02:56.576582 containerd[1637]: time="2026-01-22T01:02:56.576091465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 22 01:02:56.578647 containerd[1637]: time="2026-01-22T01:02:56.577981590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 22 01:02:57.703764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759825072.mount: Deactivated successfully.
Jan 22 01:03:00.505478 containerd[1637]: time="2026-01-22T01:03:00.504051690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:00.508109 containerd[1637]: time="2026-01-22T01:03:00.507767768Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20381254"
Jan 22 01:03:00.512168 containerd[1637]: time="2026-01-22T01:03:00.512008992Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:00.519714 containerd[1637]: time="2026-01-22T01:03:00.519677622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:00.521569 containerd[1637]: time="2026-01-22T01:03:00.521471391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.943448212s"
Jan 22 01:03:00.521829 containerd[1637]: time="2026-01-22T01:03:00.521662271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 22 01:03:00.523925 containerd[1637]: time="2026-01-22T01:03:00.523817547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 22 01:03:01.244195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506936588.mount: Deactivated successfully.
Jan 22 01:03:01.259880 containerd[1637]: time="2026-01-22T01:03:01.259706820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 22 01:03:01.264822 containerd[1637]: time="2026-01-22T01:03:01.264652471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 22 01:03:01.269685 containerd[1637]: time="2026-01-22T01:03:01.269630084Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 22 01:03:01.278417 containerd[1637]: time="2026-01-22T01:03:01.277350748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 22 01:03:01.286906 containerd[1637]: time="2026-01-22T01:03:01.285001806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 761.146759ms"
Jan 22 01:03:01.286906 containerd[1637]: time="2026-01-22T01:03:01.285100743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 22 01:03:01.288346 containerd[1637]: time="2026-01-22T01:03:01.288096205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 22 01:03:02.694958 update_engine[1618]: I20260122 01:03:02.692648 1618 update_attempter.cc:509] Updating boot flags...
Jan 22 01:03:03.707918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262889405.mount: Deactivated successfully.
Jan 22 01:03:03.805534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 22 01:03:03.825659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:03:04.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:04.924902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:03:04.972577 kernel: audit: type=1130 audit(1769043784.924:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:04.986183 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 01:03:05.238199 kubelet[2289]: E0122 01:03:05.236852 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 01:03:05.250101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 01:03:05.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:03:05.263013 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 01:03:05.277885 systemd[1]: kubelet.service: Consumed 631ms CPU time, 110.7M memory peak.
Jan 22 01:03:05.286766 kernel: audit: type=1131 audit(1769043785.262:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:03:13.575963 containerd[1637]: time="2026-01-22T01:03:13.573936869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:13.575963 containerd[1637]: time="2026-01-22T01:03:13.575866000Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58916088"
Jan 22 01:03:13.584213 containerd[1637]: time="2026-01-22T01:03:13.584096216Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:13.590225 containerd[1637]: time="2026-01-22T01:03:13.590136236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 01:03:13.596456 containerd[1637]: time="2026-01-22T01:03:13.591713530Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 12.303531833s"
Jan 22 01:03:13.596456 containerd[1637]: time="2026-01-22T01:03:13.591858361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 22 01:03:15.419769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 22 01:03:15.436956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:03:16.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:16.035085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:03:16.068635 kernel: audit: type=1130 audit(1769043796.034:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:16.076497 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 01:03:16.346990 kubelet[2374]: E0122 01:03:16.346027 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 01:03:16.364083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 01:03:16.365050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 01:03:16.366581 systemd[1]: kubelet.service: Consumed 486ms CPU time, 110.2M memory peak.
Jan 22 01:03:16.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:03:16.391156 kernel: audit: type=1131 audit(1769043796.365:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:03:20.664888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:03:20.665570 systemd[1]: kubelet.service: Consumed 486ms CPU time, 110.2M memory peak.
Jan 22 01:03:20.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:20.670270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:03:20.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:20.708743 kernel: audit: type=1130 audit(1769043800.663:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:20.708890 kernel: audit: type=1131 audit(1769043800.663:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:20.737827 systemd[1]: Reload requested from client PID 2387 ('systemctl') (unit session-7.scope)...
Jan 22 01:03:20.738032 systemd[1]: Reloading...
Jan 22 01:03:20.901710 zram_generator::config[2434]: No configuration found.
Jan 22 01:03:21.402549 systemd[1]: Reloading finished in 663 ms.
Jan 22 01:03:21.447000 audit: BPF prog-id=63 op=LOAD
Jan 22 01:03:21.447000 audit: BPF prog-id=55 op=UNLOAD
Jan 22 01:03:21.448000 audit: BPF prog-id=64 op=LOAD
Jan 22 01:03:21.448000 audit: BPF prog-id=65 op=LOAD
Jan 22 01:03:21.448000 audit: BPF prog-id=56 op=UNLOAD
Jan 22 01:03:21.448000 audit: BPF prog-id=57 op=UNLOAD
Jan 22 01:03:21.451000 audit: BPF prog-id=66 op=LOAD
Jan 22 01:03:21.454501 kernel: audit: type=1334 audit(1769043801.447:283): prog-id=63 op=LOAD
Jan 22 01:03:21.454550 kernel: audit: type=1334 audit(1769043801.447:284): prog-id=55 op=UNLOAD
Jan 22 01:03:21.454644 kernel: audit: type=1334 audit(1769043801.448:285): prog-id=64 op=LOAD
Jan 22 01:03:21.454683 kernel: audit: type=1334 audit(1769043801.448:286): prog-id=65 op=LOAD
Jan 22 01:03:21.454720 kernel: audit: type=1334 audit(1769043801.448:287): prog-id=56 op=UNLOAD
Jan 22 01:03:21.454748 kernel: audit: type=1334 audit(1769043801.448:288): prog-id=57 op=UNLOAD
Jan 22 01:03:21.454775 kernel: audit: type=1334 audit(1769043801.451:289): prog-id=66 op=LOAD
Jan 22 01:03:21.454795 kernel: audit: type=1334 audit(1769043801.451:290): prog-id=44 op=UNLOAD
Jan 22 01:03:21.454826 kernel: audit: type=1334 audit(1769043801.451:291): prog-id=67 op=LOAD
Jan 22 01:03:21.454856 kernel: audit: type=1334 audit(1769043801.451:292): prog-id=68 op=LOAD
Jan 22 01:03:21.451000 audit: BPF prog-id=44 op=UNLOAD
Jan 22 01:03:21.451000 audit: BPF prog-id=67 op=LOAD
Jan 22 01:03:21.451000 audit: BPF prog-id=68 op=LOAD
Jan 22 01:03:21.452000 audit: BPF prog-id=45 op=UNLOAD
Jan 22 01:03:21.452000 audit: BPF prog-id=46 op=UNLOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=69 op=LOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=60 op=UNLOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=70 op=LOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=71 op=LOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=61 op=UNLOAD
Jan 22 01:03:21.457000 audit: BPF prog-id=62 op=UNLOAD
Jan 22 01:03:21.458000 audit: BPF prog-id=72 op=LOAD
Jan 22 01:03:21.459000 audit: BPF prog-id=73 op=LOAD
Jan 22 01:03:21.459000 audit: BPF prog-id=50 op=UNLOAD
Jan 22 01:03:21.459000 audit: BPF prog-id=51 op=UNLOAD
Jan 22 01:03:21.461000 audit: BPF prog-id=74 op=LOAD
Jan 22 01:03:21.461000 audit: BPF prog-id=43 op=UNLOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=75 op=LOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=52 op=UNLOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=76 op=LOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=77 op=LOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=53 op=UNLOAD
Jan 22 01:03:21.463000 audit: BPF prog-id=54 op=UNLOAD
Jan 22 01:03:21.464000 audit: BPF prog-id=78 op=LOAD
Jan 22 01:03:21.464000 audit: BPF prog-id=59 op=UNLOAD
Jan 22 01:03:21.466000 audit: BPF prog-id=79 op=LOAD
Jan 22 01:03:21.466000 audit: BPF prog-id=58 op=UNLOAD
Jan 22 01:03:21.472000 audit: BPF prog-id=80 op=LOAD
Jan 22 01:03:21.474000 audit: BPF prog-id=47 op=UNLOAD
Jan 22 01:03:21.474000 audit: BPF prog-id=81 op=LOAD
Jan 22 01:03:21.474000 audit: BPF prog-id=82 op=LOAD
Jan 22 01:03:21.474000 audit: BPF prog-id=48 op=UNLOAD
Jan 22 01:03:21.474000 audit: BPF prog-id=49 op=UNLOAD
Jan 22 01:03:21.507842 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 22 01:03:21.508128 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 22 01:03:21.508807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:03:21.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 01:03:21.508997 systemd[1]: kubelet.service: Consumed 257ms CPU time, 98.4M memory peak.
Jan 22 01:03:21.511861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 01:03:21.859869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 01:03:21.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:03:21.889331 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 22 01:03:22.016268 kubelet[2480]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 01:03:22.016268 kubelet[2480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 01:03:22.016268 kubelet[2480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 01:03:22.016268 kubelet[2480]: I0122 01:03:22.016199 2480 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 01:03:22.473454 kubelet[2480]: I0122 01:03:22.472891 2480 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 22 01:03:22.473454 kubelet[2480]: I0122 01:03:22.472996 2480 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 01:03:22.473454 kubelet[2480]: I0122 01:03:22.473251 2480 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 22 01:03:22.520177 kubelet[2480]: I0122 01:03:22.519903 2480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 22 01:03:22.520330 kubelet[2480]: E0122 01:03:22.520280 2480 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 22 01:03:22.540572 kubelet[2480]: I0122 01:03:22.540535 2480 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 01:03:22.564609 kubelet[2480]: I0122 01:03:22.563804 2480 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 22 01:03:22.566170 kubelet[2480]: I0122 01:03:22.565122 2480 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 01:03:22.567088 kubelet[2480]: I0122 01:03:22.565721 2480 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 01:03:22.567088 kubelet[2480]: I0122 01:03:22.567028 2480 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 01:03:22.567088 kubelet[2480]: I0122 01:03:22.567049 2480 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 01:03:22.567554 kubelet[2480]: I0122 01:03:22.567237 2480 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 01:03:22.570910 kubelet[2480]: I0122 01:03:22.570747 2480 kubelet.go:480] "Attempting to sync node with API server"
Jan 22 01:03:22.570910 kubelet[2480]: I0122 01:03:22.570807 2480 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 01:03:22.570910 kubelet[2480]: I0122 01:03:22.570851 2480 kubelet.go:386] "Adding apiserver pod source"
Jan 22 01:03:22.570910 kubelet[2480]: I0122 01:03:22.570881 2480 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 01:03:22.588618 kubelet[2480]: I0122 01:03:22.588541 2480 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 22 01:03:22.588746 kubelet[2480]: E0122 01:03:22.588636 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 01:03:22.589263 kubelet[2480]: I0122 01:03:22.589168 2480 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 22 01:03:22.590937 kubelet[2480]: W0122 01:03:22.590760 2480 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 22 01:03:22.593798 kubelet[2480]: E0122 01:03:22.593132 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 01:03:22.601289 kubelet[2480]: I0122 01:03:22.600337 2480 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 01:03:22.601289 kubelet[2480]: I0122 01:03:22.600509 2480 server.go:1289] "Started kubelet" Jan 22 01:03:22.601289 kubelet[2480]: I0122 01:03:22.600883 2480 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 01:03:22.609792 kubelet[2480]: I0122 01:03:22.609633 2480 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 01:03:22.610583 kubelet[2480]: I0122 01:03:22.610258 2480 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 01:03:22.612537 kubelet[2480]: I0122 01:03:22.612469 2480 server.go:317] "Adding debug handlers to kubelet server" Jan 22 01:03:22.614316 kubelet[2480]: E0122 01:03:22.611264 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ce7f7361b737a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-22 01:03:22.60046937 +0000 UTC m=+0.700544033,LastTimestamp:2026-01-22 01:03:22.60046937 +0000 UTC m=+0.700544033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 22 01:03:22.615826 kubelet[2480]: I0122 01:03:22.615695 2480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 01:03:22.617112 kubelet[2480]: I0122 01:03:22.616606 2480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 01:03:22.617112 kubelet[2480]: I0122 01:03:22.617029 2480 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 01:03:22.619191 kubelet[2480]: E0122 01:03:22.619033 2480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 22 01:03:22.620693 kubelet[2480]: E0122 01:03:22.619841 2480 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 01:03:22.620766 kubelet[2480]: E0122 01:03:22.620745 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 01:03:22.620813 kubelet[2480]: I0122 01:03:22.620789 2480 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 01:03:22.621061 kubelet[2480]: I0122 01:03:22.620869 2480 reconciler.go:26] "Reconciler: start to sync state" Jan 22 01:03:22.622218 kubelet[2480]: E0122 01:03:22.622082 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Jan 22 01:03:22.624833 kubelet[2480]: I0122 01:03:22.624789 2480 
factory.go:223] Registration of the systemd container factory successfully Jan 22 01:03:22.626136 kubelet[2480]: I0122 01:03:22.624904 2480 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 01:03:22.630316 kubelet[2480]: I0122 01:03:22.629652 2480 factory.go:223] Registration of the containerd container factory successfully Jan 22 01:03:22.649000 audit[2501]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.649000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd642ad600 a2=0 a3=0 items=0 ppid=2480 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 01:03:22.654000 audit[2503]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.654000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6dbeb480 a2=0 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 01:03:22.666000 audit[2506]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.666000 
audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdcda9d130 a2=0 a3=0 items=0 ppid=2480 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 01:03:22.671479 kubelet[2480]: I0122 01:03:22.671245 2480 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 01:03:22.671479 kubelet[2480]: I0122 01:03:22.671335 2480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 01:03:22.671479 kubelet[2480]: I0122 01:03:22.671469 2480 state_mem.go:36] "Initialized new in-memory state store" Jan 22 01:03:22.681000 audit[2508]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.681000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe05271720 a2=0 a3=0 items=0 ppid=2480 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 01:03:22.720261 kubelet[2480]: E0122 01:03:22.720014 2480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 22 01:03:22.782321 kubelet[2480]: I0122 01:03:22.781881 2480 policy_none.go:49] "None policy: Start" Jan 22 01:03:22.782321 kubelet[2480]: I0122 01:03:22.781915 2480 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 01:03:22.782321 kubelet[2480]: I0122 01:03:22.781930 
2480 state_mem.go:35] "Initializing new in-memory state store" Jan 22 01:03:22.809000 audit[2511]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.809000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffebd6c2f10 a2=0 a3=0 items=0 ppid=2480 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 22 01:03:22.812773 kubelet[2480]: I0122 01:03:22.812725 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 22 01:03:22.816000 audit[2512]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:22.816000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0666da40 a2=0 a3=0 items=0 ppid=2480 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 01:03:22.819617 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 22 01:03:22.820594 kubelet[2480]: E0122 01:03:22.820286 2480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 22 01:03:22.819000 audit[2513]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.819000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf6000a70 a2=0 a3=0 items=0 ppid=2480 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 01:03:22.821723 kubelet[2480]: I0122 01:03:22.821172 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 22 01:03:22.821723 kubelet[2480]: I0122 01:03:22.821276 2480 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 22 01:03:22.821723 kubelet[2480]: I0122 01:03:22.821538 2480 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 22 01:03:22.821723 kubelet[2480]: I0122 01:03:22.821560 2480 kubelet.go:2436] "Starting kubelet main sync loop" Jan 22 01:03:22.821723 kubelet[2480]: E0122 01:03:22.821643 2480 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 01:03:22.824841 kubelet[2480]: E0122 01:03:22.824731 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Jan 22 01:03:22.825000 audit[2515]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.825000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecbcf4b20 a2=0 a3=0 items=0 ppid=2480 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 01:03:22.828000 audit[2516]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:22.828000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7c5160d0 a2=0 a3=0 items=0 ppid=2480 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.828000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 01:03:22.830000 audit[2517]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:22.830000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29c77790 a2=0 a3=0 items=0 ppid=2480 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 01:03:22.832543 kubelet[2480]: E0122 01:03:22.832507 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 01:03:22.838000 audit[2518]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:22.838000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7d8178f0 a2=0 a3=0 items=0 ppid=2480 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 01:03:22.843885 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 22 01:03:22.845000 audit[2519]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:22.845000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0b3b6210 a2=0 a3=0 items=0 ppid=2480 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:22.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 01:03:22.866674 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 22 01:03:22.871304 kubelet[2480]: E0122 01:03:22.871172 2480 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 01:03:22.871679 kubelet[2480]: I0122 01:03:22.871621 2480 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 01:03:22.871679 kubelet[2480]: I0122 01:03:22.871635 2480 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 01:03:22.873072 kubelet[2480]: I0122 01:03:22.873032 2480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 01:03:22.875779 kubelet[2480]: E0122 01:03:22.875292 2480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 01:03:22.875779 kubelet[2480]: E0122 01:03:22.875560 2480 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 22 01:03:22.961864 systemd[1]: Created slice kubepods-burstable-pod9ade0b32f65df1dd0d78108b09ce09ae.slice - libcontainer container kubepods-burstable-pod9ade0b32f65df1dd0d78108b09ce09ae.slice. Jan 22 01:03:22.977321 kubelet[2480]: I0122 01:03:22.976596 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 01:03:22.977321 kubelet[2480]: E0122 01:03:22.977126 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 22 01:03:22.981244 kubelet[2480]: E0122 01:03:22.980840 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:22.991729 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 22 01:03:23.003820 kubelet[2480]: E0122 01:03:23.003723 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:23.010636 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 22 01:03:23.016084 kubelet[2480]: E0122 01:03:23.015909 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:23.025349 kubelet[2480]: I0122 01:03:23.023089 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:23.025349 kubelet[2480]: I0122 01:03:23.024512 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:23.025349 kubelet[2480]: I0122 01:03:23.024544 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:23.025349 kubelet[2480]: I0122 01:03:23.024576 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:23.025349 kubelet[2480]: I0122 01:03:23.024598 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:23.026616 kubelet[2480]: I0122 01:03:23.024631 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:23.026616 kubelet[2480]: I0122 01:03:23.024653 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:23.026616 kubelet[2480]: I0122 01:03:23.024681 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:23.026616 kubelet[2480]: I0122 01:03:23.024720 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:23.181916 kubelet[2480]: I0122 01:03:23.181110 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 01:03:23.181916 kubelet[2480]: 
E0122 01:03:23.181708 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 22 01:03:23.230328 kubelet[2480]: E0122 01:03:23.230140 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Jan 22 01:03:23.283451 kubelet[2480]: E0122 01:03:23.282860 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.284020 containerd[1637]: time="2026-01-22T01:03:23.283784774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ade0b32f65df1dd0d78108b09ce09ae,Namespace:kube-system,Attempt:0,}" Jan 22 01:03:23.305516 kubelet[2480]: E0122 01:03:23.305222 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.306158 containerd[1637]: time="2026-01-22T01:03:23.306043269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 22 01:03:23.320607 kubelet[2480]: E0122 01:03:23.317891 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.320841 containerd[1637]: time="2026-01-22T01:03:23.319647483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 22 01:03:23.410549 
containerd[1637]: time="2026-01-22T01:03:23.410186666Z" level=info msg="connecting to shim 9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88" address="unix:///run/containerd/s/54fd200716c71b6aac4db98a5d39aa4800a16c50904ccce84b9dfe81017af766" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:23.424524 kubelet[2480]: E0122 01:03:23.423857 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 01:03:23.425209 kubelet[2480]: E0122 01:03:23.424886 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 01:03:23.462155 containerd[1637]: time="2026-01-22T01:03:23.461623758Z" level=info msg="connecting to shim b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2" address="unix:///run/containerd/s/99a0b2619b17bc7a5745d27c99dcc13d6b5925f302ed90b86510d5b20bb1d23a" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:23.514613 systemd[1]: Started cri-containerd-9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88.scope - libcontainer container 9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88. 
Jan 22 01:03:23.566518 containerd[1637]: time="2026-01-22T01:03:23.565819804Z" level=info msg="connecting to shim ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b" address="unix:///run/containerd/s/9f08657b9ed25d74e171464fd946ca0365365c8d712f4fcc8264127169cf2c19" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:23.573000 audit: BPF prog-id=83 op=LOAD Jan 22 01:03:23.574000 audit: BPF prog-id=84 op=LOAD Jan 22 01:03:23.574000 audit[2541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.574000 audit: BPF prog-id=84 op=UNLOAD Jan 22 01:03:23.574000 audit[2541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.575000 audit: BPF prog-id=85 op=LOAD Jan 22 01:03:23.575000 audit[2541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.576000 audit: BPF prog-id=86 op=LOAD Jan 22 01:03:23.576000 audit[2541]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.576000 audit: BPF prog-id=86 op=UNLOAD Jan 22 01:03:23.576000 audit[2541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.576000 audit: BPF prog-id=85 op=UNLOAD Jan 22 01:03:23.576000 audit[2541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.577000 audit: BPF prog-id=87 op=LOAD Jan 22 01:03:23.577000 audit[2541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2529 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333636633066353838303931646232346465633237306136396432 Jan 22 01:03:23.583777 kubelet[2480]: I0122 01:03:23.583658 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 01:03:23.584152 kubelet[2480]: E0122 01:03:23.584113 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 22 01:03:23.684184 systemd[1]: Started cri-containerd-b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2.scope - libcontainer container b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2. Jan 22 01:03:23.719880 systemd[1]: Started cri-containerd-ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b.scope - libcontainer container ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b. 
Jan 22 01:03:23.730083 containerd[1637]: time="2026-01-22T01:03:23.730047424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ade0b32f65df1dd0d78108b09ce09ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88\"" Jan 22 01:03:23.735016 kubelet[2480]: E0122 01:03:23.733906 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.754300 containerd[1637]: time="2026-01-22T01:03:23.754064164Z" level=info msg="CreateContainer within sandbox \"9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 22 01:03:23.790000 audit: BPF prog-id=88 op=LOAD Jan 22 01:03:23.791000 audit: BPF prog-id=89 op=LOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=89 op=UNLOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=90 op=LOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=91 op=LOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=91 op=UNLOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=90 op=UNLOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.791000 audit: BPF prog-id=92 op=LOAD Jan 22 01:03:23.791000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2576 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653761313762663364323763643638626533653539373564303366 Jan 22 01:03:23.804000 audit: BPF prog-id=93 op=LOAD Jan 22 01:03:23.811770 containerd[1637]: time="2026-01-22T01:03:23.811720623Z" level=info msg="Container 57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:23.810000 audit: BPF prog-id=94 op=LOAD Jan 22 
01:03:23.810000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.810000 audit: BPF prog-id=94 op=UNLOAD Jan 22 01:03:23.810000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.810000 audit: BPF prog-id=95 op=LOAD Jan 22 01:03:23.810000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 
01:03:23.812000 audit: BPF prog-id=96 op=LOAD Jan 22 01:03:23.812000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.812000 audit: BPF prog-id=96 op=UNLOAD Jan 22 01:03:23.812000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.812000 audit: BPF prog-id=95 op=UNLOAD Jan 22 01:03:23.812000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.812000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.814000 audit: BPF prog-id=97 op=LOAD Jan 22 01:03:23.814000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2552 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237323330373162393734346662353664613435626436333037303861 Jan 22 01:03:23.829667 kubelet[2480]: E0122 01:03:23.829172 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ce7f7361b737a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-22 01:03:22.60046937 +0000 UTC m=+0.700544033,LastTimestamp:2026-01-22 01:03:22.60046937 +0000 UTC m=+0.700544033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 22 01:03:23.836792 containerd[1637]: time="2026-01-22T01:03:23.836751031Z" level=info msg="CreateContainer within sandbox 
\"9b366c0f588091db24dec270a69d2af6faffec94ffdad0977f513610842e5c88\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21\"" Jan 22 01:03:23.841341 containerd[1637]: time="2026-01-22T01:03:23.841276890Z" level=info msg="StartContainer for \"57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21\"" Jan 22 01:03:23.843235 containerd[1637]: time="2026-01-22T01:03:23.843056992Z" level=info msg="connecting to shim 57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21" address="unix:///run/containerd/s/54fd200716c71b6aac4db98a5d39aa4800a16c50904ccce84b9dfe81017af766" protocol=ttrpc version=3 Jan 22 01:03:23.887642 containerd[1637]: time="2026-01-22T01:03:23.887336187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b\"" Jan 22 01:03:23.896553 kubelet[2480]: E0122 01:03:23.896477 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.896948 systemd[1]: Started cri-containerd-57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21.scope - libcontainer container 57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21. 
Jan 22 01:03:23.911705 containerd[1637]: time="2026-01-22T01:03:23.911645911Z" level=info msg="CreateContainer within sandbox \"ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 22 01:03:23.934583 containerd[1637]: time="2026-01-22T01:03:23.934543272Z" level=info msg="Container 6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:23.937000 audit: BPF prog-id=98 op=LOAD Jan 22 01:03:23.938000 audit: BPF prog-id=99 op=LOAD Jan 22 01:03:23.938000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.938000 audit: BPF prog-id=99 op=UNLOAD Jan 22 01:03:23.938000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.939000 audit: BPF prog-id=100 op=LOAD Jan 22 01:03:23.939000 audit[2643]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.940000 audit: BPF prog-id=101 op=LOAD Jan 22 01:03:23.940000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.941000 audit: BPF prog-id=101 op=UNLOAD Jan 22 01:03:23.941000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.943750 containerd[1637]: time="2026-01-22T01:03:23.942872828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2\"" Jan 22 01:03:23.941000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.942000 audit: BPF prog-id=100 op=UNLOAD Jan 22 01:03:23.942000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.943000 audit: BPF prog-id=102 op=LOAD Jan 22 01:03:23.943000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2529 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:23.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653334346266663731323062633636376563633636323839363266 Jan 22 01:03:23.947130 kubelet[2480]: E0122 01:03:23.946887 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:23.959189 containerd[1637]: time="2026-01-22T01:03:23.958853474Z" level=info msg="CreateContainer within sandbox 
\"b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 22 01:03:23.966805 containerd[1637]: time="2026-01-22T01:03:23.966677926Z" level=info msg="CreateContainer within sandbox \"ede7a17bf3d27cd68be3e5975d03fe907e90dc7247bd3c7d9e668bcdb731359b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9\"" Jan 22 01:03:23.967643 containerd[1637]: time="2026-01-22T01:03:23.967561049Z" level=info msg="StartContainer for \"6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9\"" Jan 22 01:03:23.976513 containerd[1637]: time="2026-01-22T01:03:23.973478552Z" level=info msg="connecting to shim 6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9" address="unix:///run/containerd/s/9f08657b9ed25d74e171464fd946ca0365365c8d712f4fcc8264127169cf2c19" protocol=ttrpc version=3 Jan 22 01:03:23.984687 containerd[1637]: time="2026-01-22T01:03:23.984337244Z" level=info msg="Container 3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:24.016258 containerd[1637]: time="2026-01-22T01:03:24.016212392Z" level=info msg="CreateContainer within sandbox \"b723071b9744fb56da45bd630708a51d3ebdb99b1ec5e6728a85395cba90b1c2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80\"" Jan 22 01:03:24.017948 containerd[1637]: time="2026-01-22T01:03:24.017860655Z" level=info msg="StartContainer for \"3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80\"" Jan 22 01:03:24.022194 containerd[1637]: time="2026-01-22T01:03:24.022114365Z" level=info msg="connecting to shim 3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80" 
address="unix:///run/containerd/s/99a0b2619b17bc7a5745d27c99dcc13d6b5925f302ed90b86510d5b20bb1d23a" protocol=ttrpc version=3 Jan 22 01:03:24.031718 kubelet[2480]: E0122 01:03:24.031507 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Jan 22 01:03:24.034186 systemd[1]: Started cri-containerd-6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9.scope - libcontainer container 6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9. Jan 22 01:03:24.050941 containerd[1637]: time="2026-01-22T01:03:24.050896399Z" level=info msg="StartContainer for \"57e344bff7120bc667ecc6628962f83b8959ea5f8caffa603c312d3a23fc7d21\" returns successfully" Jan 22 01:03:24.075958 systemd[1]: Started cri-containerd-3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80.scope - libcontainer container 3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80. 
Jan 22 01:03:24.081000 audit: BPF prog-id=103 op=LOAD Jan 22 01:03:24.081000 audit: BPF prog-id=104 op=LOAD Jan 22 01:03:24.081000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.082000 audit: BPF prog-id=104 op=UNLOAD Jan 22 01:03:24.082000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.082000 audit: BPF prog-id=105 op=LOAD Jan 22 01:03:24.082000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.082000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.083000 audit: BPF prog-id=106 op=LOAD Jan 22 01:03:24.083000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.084000 audit: BPF prog-id=106 op=UNLOAD Jan 22 01:03:24.084000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.084000 audit: BPF prog-id=105 op=UNLOAD Jan 22 01:03:24.084000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:03:24.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.084000 audit: BPF prog-id=107 op=LOAD Jan 22 01:03:24.084000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2576 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665666237326432663132653933633834346432343939366539363934 Jan 22 01:03:24.120000 audit: BPF prog-id=108 op=LOAD Jan 22 01:03:24.121000 audit: BPF prog-id=109 op=LOAD Jan 22 01:03:24.121000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.122000 audit: BPF prog-id=109 op=UNLOAD Jan 22 01:03:24.122000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.125000 audit: BPF prog-id=110 op=LOAD Jan 22 01:03:24.125000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.126000 audit: BPF prog-id=111 op=LOAD Jan 22 01:03:24.126000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.126000 audit: BPF prog-id=111 op=UNLOAD Jan 22 01:03:24.126000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.127000 audit: BPF prog-id=110 op=UNLOAD Jan 22 01:03:24.127000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.128000 audit: BPF prog-id=112 op=LOAD Jan 22 01:03:24.128000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2552 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:24.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316663613531353766323339616635633234306265643262313436 Jan 22 01:03:24.157838 kubelet[2480]: E0122 01:03:24.157729 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 01:03:24.213029 containerd[1637]: time="2026-01-22T01:03:24.208916463Z" level=info msg="StartContainer for \"6efb72d2f12e93c844d24996e969449f06424b444b5925bd585fabd0d0209ba9\" returns successfully" Jan 22 01:03:24.235754 containerd[1637]: time="2026-01-22T01:03:24.235355713Z" level=info msg="StartContainer for \"3a1fca5157f239af5c240bed2b14694b3f12c6ccb3bd4bf49c0117abbc507c80\" returns successfully" Jan 22 01:03:24.390301 kubelet[2480]: I0122 01:03:24.390033 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 01:03:24.844963 kubelet[2480]: E0122 01:03:24.844857 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:24.845189 kubelet[2480]: E0122 01:03:24.845097 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:24.856485 kubelet[2480]: E0122 01:03:24.854351 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:24.857827 kubelet[2480]: E0122 01:03:24.857163 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:24.865204 kubelet[2480]: E0122 01:03:24.865115 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:24.865579 kubelet[2480]: E0122 01:03:24.865321 2480 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:25.866343 kubelet[2480]: E0122 01:03:25.866107 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:25.867122 kubelet[2480]: E0122 01:03:25.866549 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:25.867122 kubelet[2480]: E0122 01:03:25.867062 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 01:03:25.867251 kubelet[2480]: E0122 01:03:25.867215 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:26.482119 kubelet[2480]: E0122 01:03:26.481898 2480 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 22 01:03:26.655301 kubelet[2480]: I0122 01:03:26.655120 2480 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 22 01:03:26.721752 kubelet[2480]: I0122 01:03:26.721150 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:26.873946 kubelet[2480]: I0122 01:03:26.872470 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:26.873946 kubelet[2480]: I0122 01:03:26.873329 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:26.907495 kubelet[2480]: E0122 01:03:26.906562 2480 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:26.907495 kubelet[2480]: I0122 01:03:26.906661 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:26.907495 kubelet[2480]: E0122 01:03:26.906889 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:26.907495 kubelet[2480]: E0122 01:03:26.907167 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:26.910201 kubelet[2480]: E0122 01:03:26.909837 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:26.910201 kubelet[2480]: E0122 01:03:26.910110 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:26.919202 kubelet[2480]: E0122 01:03:26.918989 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:26.919202 kubelet[2480]: I0122 01:03:26.919142 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:26.967753 kubelet[2480]: E0122 01:03:26.967671 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:27.592881 kubelet[2480]: I0122 01:03:27.592359 2480 apiserver.go:52] "Watching apiserver" Jan 22 01:03:27.621908 kubelet[2480]: I0122 01:03:27.621735 2480 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 01:03:28.056076 kubelet[2480]: I0122 01:03:28.055828 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:28.065849 kubelet[2480]: E0122 01:03:28.065803 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:28.890801 kubelet[2480]: E0122 01:03:28.890682 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:29.382806 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit session-7.scope)... Jan 22 01:03:29.382885 systemd[1]: Reloading... Jan 22 01:03:29.532552 zram_generator::config[2818]: No configuration found. Jan 22 01:03:29.873307 systemd[1]: Reloading finished in 489 ms. Jan 22 01:03:29.927851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 01:03:29.941947 systemd[1]: kubelet.service: Deactivated successfully. Jan 22 01:03:29.942755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 01:03:29.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:03:29.942952 systemd[1]: kubelet.service: Consumed 1.636s CPU time, 133.8M memory peak. 
Jan 22 01:03:29.946034 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 22 01:03:29.946154 kernel: audit: type=1131 audit(1769043809.941:385): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:03:29.947231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 01:03:29.949000 audit: BPF prog-id=113 op=LOAD Jan 22 01:03:29.964006 kernel: audit: type=1334 audit(1769043809.949:386): prog-id=113 op=LOAD Jan 22 01:03:29.964164 kernel: audit: type=1334 audit(1769043809.949:387): prog-id=114 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=114 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=72 op=UNLOAD Jan 22 01:03:29.974471 kernel: audit: type=1334 audit(1769043809.949:388): prog-id=72 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=73 op=UNLOAD Jan 22 01:03:29.979621 kernel: audit: type=1334 audit(1769043809.949:389): prog-id=73 op=UNLOAD Jan 22 01:03:29.979694 kernel: audit: type=1334 audit(1769043809.949:390): prog-id=115 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=115 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=80 op=UNLOAD Jan 22 01:03:29.990341 kernel: audit: type=1334 audit(1769043809.949:391): prog-id=80 op=UNLOAD Jan 22 01:03:29.990512 kernel: audit: type=1334 audit(1769043809.949:392): prog-id=116 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=116 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=117 op=LOAD Jan 22 01:03:29.999825 kernel: audit: type=1334 audit(1769043809.949:393): prog-id=117 op=LOAD Jan 22 01:03:29.999976 kernel: audit: type=1334 audit(1769043809.949:394): prog-id=81 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=81 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=82 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=118 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=78 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=119 
op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=75 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=120 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=121 op=LOAD Jan 22 01:03:29.949000 audit: BPF prog-id=76 op=UNLOAD Jan 22 01:03:29.949000 audit: BPF prog-id=77 op=UNLOAD Jan 22 01:03:29.954000 audit: BPF prog-id=122 op=LOAD Jan 22 01:03:29.954000 audit: BPF prog-id=63 op=UNLOAD Jan 22 01:03:29.954000 audit: BPF prog-id=123 op=LOAD Jan 22 01:03:29.954000 audit: BPF prog-id=124 op=LOAD Jan 22 01:03:29.954000 audit: BPF prog-id=64 op=UNLOAD Jan 22 01:03:29.954000 audit: BPF prog-id=65 op=UNLOAD Jan 22 01:03:29.954000 audit: BPF prog-id=125 op=LOAD Jan 22 01:03:29.954000 audit: BPF prog-id=79 op=UNLOAD Jan 22 01:03:29.957000 audit: BPF prog-id=126 op=LOAD Jan 22 01:03:29.957000 audit: BPF prog-id=69 op=UNLOAD Jan 22 01:03:29.957000 audit: BPF prog-id=127 op=LOAD Jan 22 01:03:30.008000 audit: BPF prog-id=128 op=LOAD Jan 22 01:03:30.008000 audit: BPF prog-id=70 op=UNLOAD Jan 22 01:03:30.008000 audit: BPF prog-id=71 op=UNLOAD Jan 22 01:03:30.015000 audit: BPF prog-id=129 op=LOAD Jan 22 01:03:30.015000 audit: BPF prog-id=66 op=UNLOAD Jan 22 01:03:30.015000 audit: BPF prog-id=130 op=LOAD Jan 22 01:03:30.015000 audit: BPF prog-id=131 op=LOAD Jan 22 01:03:30.015000 audit: BPF prog-id=67 op=UNLOAD Jan 22 01:03:30.015000 audit: BPF prog-id=68 op=UNLOAD Jan 22 01:03:30.016000 audit: BPF prog-id=132 op=LOAD Jan 22 01:03:30.016000 audit: BPF prog-id=74 op=UNLOAD Jan 22 01:03:30.301318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 01:03:30.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:03:30.321120 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 01:03:30.480756 kubelet[2860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 01:03:30.480756 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 01:03:30.480756 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 01:03:30.481510 kubelet[2860]: I0122 01:03:30.480780 2860 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 01:03:30.498492 kubelet[2860]: I0122 01:03:30.497933 2860 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 22 01:03:30.498492 kubelet[2860]: I0122 01:03:30.497968 2860 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 01:03:30.498492 kubelet[2860]: I0122 01:03:30.498331 2860 server.go:956] "Client rotation is on, will bootstrap in background" Jan 22 01:03:30.500778 kubelet[2860]: I0122 01:03:30.500755 2860 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 01:03:30.504603 kubelet[2860]: I0122 01:03:30.504583 2860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 01:03:30.521906 kubelet[2860]: I0122 01:03:30.521707 2860 server.go:1446] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Jan 22 01:03:30.532325 kubelet[2860]: I0122 01:03:30.531856 2860 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 22 01:03:30.533204 kubelet[2860]: I0122 01:03:30.532959 2860 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 01:03:30.533320 kubelet[2860]: I0122 01:03:30.533121 2860 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"C
groupVersion":2} Jan 22 01:03:30.533753 kubelet[2860]: I0122 01:03:30.533332 2860 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 01:03:30.533915 kubelet[2860]: I0122 01:03:30.533706 2860 container_manager_linux.go:303] "Creating device plugin manager" Jan 22 01:03:30.537190 kubelet[2860]: I0122 01:03:30.535878 2860 state_mem.go:36] "Initialized new in-memory state store" Jan 22 01:03:30.537190 kubelet[2860]: I0122 01:03:30.536236 2860 kubelet.go:480] "Attempting to sync node with API server" Jan 22 01:03:30.537190 kubelet[2860]: I0122 01:03:30.536257 2860 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 01:03:30.537190 kubelet[2860]: I0122 01:03:30.536290 2860 kubelet.go:386] "Adding apiserver pod source" Jan 22 01:03:30.537190 kubelet[2860]: I0122 01:03:30.536304 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 01:03:30.543321 kubelet[2860]: I0122 01:03:30.541829 2860 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 01:03:30.543321 kubelet[2860]: I0122 01:03:30.542624 2860 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 01:03:30.567343 kubelet[2860]: I0122 01:03:30.566966 2860 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 01:03:30.567343 kubelet[2860]: I0122 01:03:30.567186 2860 server.go:1289] "Started kubelet" Jan 22 01:03:30.574037 kubelet[2860]: I0122 01:03:30.573922 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 01:03:30.582631 kubelet[2860]: I0122 01:03:30.582590 2860 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 01:03:30.586615 kubelet[2860]: I0122 01:03:30.586591 2860 server.go:317] "Adding debug handlers to kubelet server" Jan 22 01:03:30.596321 kubelet[2860]: I0122 01:03:30.595885 2860 
volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 01:03:30.603964 kubelet[2860]: I0122 01:03:30.603941 2860 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 01:03:30.604686 kubelet[2860]: I0122 01:03:30.604169 2860 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 01:03:30.605275 kubelet[2860]: I0122 01:03:30.593979 2860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 01:03:30.606519 kubelet[2860]: I0122 01:03:30.606246 2860 factory.go:223] Registration of the systemd container factory successfully Jan 22 01:03:30.613148 kubelet[2860]: I0122 01:03:30.613012 2860 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 01:03:30.615904 kubelet[2860]: I0122 01:03:30.610836 2860 reconciler.go:26] "Reconciler: start to sync state" Jan 22 01:03:30.619582 kubelet[2860]: I0122 01:03:30.610285 2860 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 01:03:30.621514 kubelet[2860]: E0122 01:03:30.620999 2860 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 01:03:30.637790 kubelet[2860]: I0122 01:03:30.636484 2860 factory.go:223] Registration of the containerd container factory successfully Jan 22 01:03:30.678687 kubelet[2860]: I0122 01:03:30.678645 2860 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 22 01:03:30.716928 kubelet[2860]: I0122 01:03:30.715534 2860 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 22 01:03:30.717542 kubelet[2860]: I0122 01:03:30.716727 2860 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 22 01:03:30.719471 kubelet[2860]: I0122 01:03:30.718525 2860 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 01:03:30.719471 kubelet[2860]: I0122 01:03:30.718660 2860 kubelet.go:2436] "Starting kubelet main sync loop" Jan 22 01:03:30.719471 kubelet[2860]: E0122 01:03:30.719144 2860 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 01:03:30.808225 kubelet[2860]: I0122 01:03:30.808126 2860 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 01:03:30.808225 kubelet[2860]: I0122 01:03:30.808192 2860 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 01:03:30.808225 kubelet[2860]: I0122 01:03:30.808216 2860 state_mem.go:36] "Initialized new in-memory state store" Jan 22 01:03:30.808590 kubelet[2860]: I0122 01:03:30.808526 2860 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 22 01:03:30.808590 kubelet[2860]: I0122 01:03:30.808543 2860 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 22 01:03:30.808590 kubelet[2860]: I0122 01:03:30.808565 2860 policy_none.go:49] "None policy: Start" Jan 22 01:03:30.808590 kubelet[2860]: I0122 01:03:30.808582 2860 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 01:03:30.808699 kubelet[2860]: I0122 01:03:30.808600 2860 state_mem.go:35] "Initializing new in-memory state store" Jan 22 01:03:30.810848 kubelet[2860]: I0122 01:03:30.809569 2860 state_mem.go:75] "Updated machine memory state" Jan 22 01:03:30.820277 kubelet[2860]: E0122 01:03:30.819686 2860 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 22 01:03:30.834272 kubelet[2860]: E0122 01:03:30.833922 
2860 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 01:03:30.835468 kubelet[2860]: I0122 01:03:30.835311 2860 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 01:03:30.835468 kubelet[2860]: I0122 01:03:30.835331 2860 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 01:03:30.836318 kubelet[2860]: I0122 01:03:30.835836 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 01:03:30.847665 kubelet[2860]: E0122 01:03:30.847263 2860 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 22 01:03:30.985313 kubelet[2860]: I0122 01:03:30.984856 2860 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 01:03:31.014819 kubelet[2860]: I0122 01:03:31.014717 2860 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 22 01:03:31.015169 kubelet[2860]: I0122 01:03:31.014887 2860 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 22 01:03:31.026015 kubelet[2860]: I0122 01:03:31.025770 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:31.027635 kubelet[2860]: I0122 01:03:31.027536 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.032205 kubelet[2860]: I0122 01:03:31.031621 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.051168 kubelet[2860]: E0122 01:03:31.051012 2860 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:31.122813 kubelet[2860]: I0122 01:03:31.122532 2860 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.122813 kubelet[2860]: I0122 01:03:31.122717 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.122813 kubelet[2860]: I0122 01:03:31.122756 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.122813 kubelet[2860]: I0122 01:03:31.122782 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.122813 kubelet[2860]: I0122 01:03:31.122805 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.123031 kubelet[2860]: I0122 
01:03:31.122827 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.123031 kubelet[2860]: I0122 01:03:31.122854 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 01:03:31.123031 kubelet[2860]: I0122 01:03:31.122876 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 22 01:03:31.123209 kubelet[2860]: I0122 01:03:31.123191 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ade0b32f65df1dd0d78108b09ce09ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ade0b32f65df1dd0d78108b09ce09ae\") " pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.349166 kubelet[2860]: E0122 01:03:31.348990 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:31.350276 kubelet[2860]: E0122 01:03:31.350141 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 22 01:03:31.352561 kubelet[2860]: E0122 01:03:31.352353 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:31.538756 kubelet[2860]: I0122 01:03:31.538679 2860 apiserver.go:52] "Watching apiserver" Jan 22 01:03:31.605506 kubelet[2860]: I0122 01:03:31.605237 2860 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 01:03:31.669571 kubelet[2860]: I0122 01:03:31.669306 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.669285472 podStartE2EDuration="3.669285472s" podCreationTimestamp="2026-01-22 01:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:03:31.638333928 +0000 UTC m=+1.295527107" watchObservedRunningTime="2026-01-22 01:03:31.669285472 +0000 UTC m=+1.326478661" Jan 22 01:03:31.723128 kubelet[2860]: I0122 01:03:31.722863 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.722849846 podStartE2EDuration="722.849846ms" podCreationTimestamp="2026-01-22 01:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:03:31.673356983 +0000 UTC m=+1.330550183" watchObservedRunningTime="2026-01-22 01:03:31.722849846 +0000 UTC m=+1.380043015" Jan 22 01:03:31.723128 kubelet[2860]: I0122 01:03:31.723014 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.723008752 podStartE2EDuration="723.008752ms" podCreationTimestamp="2026-01-22 01:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:03:31.715815903 +0000 UTC m=+1.373009082" watchObservedRunningTime="2026-01-22 01:03:31.723008752 +0000 UTC m=+1.380201941" Jan 22 01:03:31.810497 kubelet[2860]: I0122 01:03:31.808668 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.810497 kubelet[2860]: E0122 01:03:31.809648 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:31.811686 kubelet[2860]: E0122 01:03:31.811622 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:31.837690 kubelet[2860]: E0122 01:03:31.837575 2860 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 22 01:03:31.838011 kubelet[2860]: E0122 01:03:31.837865 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:32.810995 kubelet[2860]: E0122 01:03:32.810848 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:32.811654 kubelet[2860]: E0122 01:03:32.811345 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:34.006216 kubelet[2860]: I0122 01:03:34.006050 2860 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 22 01:03:34.008205 containerd[1637]: 
time="2026-01-22T01:03:34.007966876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 22 01:03:34.008903 kubelet[2860]: I0122 01:03:34.008529 2860 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 22 01:03:34.928823 kubelet[2860]: E0122 01:03:34.928702 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.010208 systemd[1]: Created slice kubepods-besteffort-pod7d5f1bbe_9db7_4d00_8744_72d4b7255137.slice - libcontainer container kubepods-besteffort-pod7d5f1bbe_9db7_4d00_8744_72d4b7255137.slice. Jan 22 01:03:35.062023 kubelet[2860]: I0122 01:03:35.061331 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f1bbe-9db7-4d00-8744-72d4b7255137-xtables-lock\") pod \"kube-proxy-xtxs7\" (UID: \"7d5f1bbe-9db7-4d00-8744-72d4b7255137\") " pod="kube-system/kube-proxy-xtxs7" Jan 22 01:03:35.062851 kubelet[2860]: I0122 01:03:35.062288 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5f1bbe-9db7-4d00-8744-72d4b7255137-lib-modules\") pod \"kube-proxy-xtxs7\" (UID: \"7d5f1bbe-9db7-4d00-8744-72d4b7255137\") " pod="kube-system/kube-proxy-xtxs7" Jan 22 01:03:35.062851 kubelet[2860]: I0122 01:03:35.062323 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rll5\" (UniqueName: \"kubernetes.io/projected/7d5f1bbe-9db7-4d00-8744-72d4b7255137-kube-api-access-9rll5\") pod \"kube-proxy-xtxs7\" (UID: \"7d5f1bbe-9db7-4d00-8744-72d4b7255137\") " pod="kube-system/kube-proxy-xtxs7" Jan 22 01:03:35.062851 kubelet[2860]: I0122 01:03:35.062351 2860 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d5f1bbe-9db7-4d00-8744-72d4b7255137-kube-proxy\") pod \"kube-proxy-xtxs7\" (UID: \"7d5f1bbe-9db7-4d00-8744-72d4b7255137\") " pod="kube-system/kube-proxy-xtxs7" Jan 22 01:03:35.269054 systemd[1]: Created slice kubepods-besteffort-pod4deb3c41_ca49_4928_a083_04359103c4c7.slice - libcontainer container kubepods-besteffort-pod4deb3c41_ca49_4928_a083_04359103c4c7.slice. Jan 22 01:03:35.327954 kubelet[2860]: E0122 01:03:35.327594 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.329031 containerd[1637]: time="2026-01-22T01:03:35.328903063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtxs7,Uid:7d5f1bbe-9db7-4d00-8744-72d4b7255137,Namespace:kube-system,Attempt:0,}" Jan 22 01:03:35.365301 kubelet[2860]: I0122 01:03:35.365070 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9w2h\" (UniqueName: \"kubernetes.io/projected/4deb3c41-ca49-4928-a083-04359103c4c7-kube-api-access-k9w2h\") pod \"tigera-operator-7dcd859c48-26cww\" (UID: \"4deb3c41-ca49-4928-a083-04359103c4c7\") " pod="tigera-operator/tigera-operator-7dcd859c48-26cww" Jan 22 01:03:35.365301 kubelet[2860]: I0122 01:03:35.365244 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4deb3c41-ca49-4928-a083-04359103c4c7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-26cww\" (UID: \"4deb3c41-ca49-4928-a083-04359103c4c7\") " pod="tigera-operator/tigera-operator-7dcd859c48-26cww" Jan 22 01:03:35.443734 containerd[1637]: time="2026-01-22T01:03:35.443520009Z" level=info msg="connecting to shim cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed" 
address="unix:///run/containerd/s/425303d1243cb405e30e0fb26841fbfbc07525101da2f39a82bce31095353810" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:35.468598 kubelet[2860]: E0122 01:03:35.468550 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.567773 systemd[1]: Started cri-containerd-cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed.scope - libcontainer container cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed. Jan 22 01:03:35.583476 containerd[1637]: time="2026-01-22T01:03:35.583034440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-26cww,Uid:4deb3c41-ca49-4928-a083-04359103c4c7,Namespace:tigera-operator,Attempt:0,}" Jan 22 01:03:35.639549 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 22 01:03:35.639686 kernel: audit: type=1334 audit(1769043815.626:427): prog-id=133 op=LOAD Jan 22 01:03:35.626000 audit: BPF prog-id=133 op=LOAD Jan 22 01:03:35.627000 audit: BPF prog-id=134 op=LOAD Jan 22 01:03:35.647618 kernel: audit: type=1334 audit(1769043815.627:428): prog-id=134 op=LOAD Jan 22 01:03:35.627000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.660708 containerd[1637]: time="2026-01-22T01:03:35.660663785Z" level=info msg="connecting to shim cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699" address="unix:///run/containerd/s/fc5d95524c32fa1088f89f9184fa7fe43eb8355f19d985103e60242a66f69f9e" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:35.670459 kernel: audit: type=1300 audit(1769043815.627:428): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000138238 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.695093 kernel: audit: type=1327 audit(1769043815.627:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.695256 kernel: audit: type=1334 audit(1769043815.627:429): prog-id=134 op=UNLOAD Jan 22 01:03:35.627000 audit: BPF prog-id=134 op=UNLOAD Jan 22 01:03:35.627000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.725088 kernel: audit: type=1300 audit(1769043815.627:429): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 
01:03:35.637000 audit: BPF prog-id=135 op=LOAD Jan 22 01:03:35.755880 kernel: audit: type=1327 audit(1769043815.627:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.755970 kernel: audit: type=1334 audit(1769043815.637:430): prog-id=135 op=LOAD Jan 22 01:03:35.756022 kernel: audit: type=1300 audit(1769043815.637:430): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.805045 kernel: audit: type=1327 audit(1769043815.637:430): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.637000 audit: BPF prog-id=136 op=LOAD Jan 22 01:03:35.637000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 
ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.637000 audit: BPF prog-id=136 op=UNLOAD Jan 22 01:03:35.637000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.637000 audit: BPF prog-id=135 op=UNLOAD Jan 22 01:03:35.637000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.637000 audit: BPF prog-id=137 op=LOAD Jan 22 01:03:35.637000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001386e8 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376466356230393964393464363135316533343430316631613332 Jan 22 01:03:35.807038 systemd[1]: Started cri-containerd-cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699.scope - libcontainer container cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699. Jan 22 01:03:35.812556 containerd[1637]: time="2026-01-22T01:03:35.812359699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtxs7,Uid:7d5f1bbe-9db7-4d00-8744-72d4b7255137,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed\"" Jan 22 01:03:35.817451 kubelet[2860]: E0122 01:03:35.817318 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.835233 kubelet[2860]: E0122 01:03:35.834072 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.837059 kubelet[2860]: E0122 01:03:35.836599 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:35.853499 containerd[1637]: time="2026-01-22T01:03:35.852727976Z" level=info msg="CreateContainer within sandbox \"cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 22 01:03:35.861000 audit: BPF prog-id=138 op=LOAD Jan 22 01:03:35.864000 audit: BPF prog-id=139 op=LOAD Jan 22 01:03:35.864000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.864000 audit: BPF prog-id=139 op=UNLOAD Jan 22 01:03:35.864000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.864000 audit: BPF prog-id=140 op=LOAD Jan 22 01:03:35.864000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.864000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.865000 audit: BPF prog-id=141 op=LOAD Jan 22 01:03:35.865000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.865000 audit: BPF prog-id=141 op=UNLOAD Jan 22 01:03:35.865000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.865000 audit: BPF prog-id=140 op=UNLOAD Jan 22 01:03:35.865000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:03:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.865000 audit: BPF prog-id=142 op=LOAD Jan 22 01:03:35.865000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2972 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:35.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362653466643033376538313932613830396464313533366165396634 Jan 22 01:03:35.897492 containerd[1637]: time="2026-01-22T01:03:35.897245201Z" level=info msg="Container 229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:35.927028 containerd[1637]: time="2026-01-22T01:03:35.926825906Z" level=info msg="CreateContainer within sandbox \"cb7df5b099d94d6151e34401f1a321ccb2b22442f17edb0da31caee97a8b53ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6\"" Jan 22 01:03:35.930993 containerd[1637]: time="2026-01-22T01:03:35.930854665Z" level=info msg="StartContainer for \"229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6\"" Jan 22 01:03:35.940097 containerd[1637]: time="2026-01-22T01:03:35.940046096Z" level=info msg="connecting to shim 229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6" address="unix:///run/containerd/s/425303d1243cb405e30e0fb26841fbfbc07525101da2f39a82bce31095353810" 
protocol=ttrpc version=3 Jan 22 01:03:35.983902 containerd[1637]: time="2026-01-22T01:03:35.983189587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-26cww,Uid:4deb3c41-ca49-4928-a083-04359103c4c7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699\"" Jan 22 01:03:35.995700 containerd[1637]: time="2026-01-22T01:03:35.995304387Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 22 01:03:35.997294 systemd[1]: Started cri-containerd-229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6.scope - libcontainer container 229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6. Jan 22 01:03:36.120637 kubelet[2860]: E0122 01:03:36.120507 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:36.118000 audit: BPF prog-id=143 op=LOAD Jan 22 01:03:36.118000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2931 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396636633938356237343037303861623338323031383133356330 Jan 22 01:03:36.118000 audit: BPF prog-id=144 op=LOAD Jan 22 01:03:36.118000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2931 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396636633938356237343037303861623338323031383133356330 Jan 22 01:03:36.118000 audit: BPF prog-id=144 op=UNLOAD Jan 22 01:03:36.118000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396636633938356237343037303861623338323031383133356330 Jan 22 01:03:36.118000 audit: BPF prog-id=143 op=UNLOAD Jan 22 01:03:36.118000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396636633938356237343037303861623338323031383133356330 Jan 22 01:03:36.118000 audit: BPF prog-id=145 op=LOAD Jan 22 01:03:36.118000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2931 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396636633938356237343037303861623338323031383133356330 Jan 22 01:03:36.199593 containerd[1637]: time="2026-01-22T01:03:36.199493704Z" level=info msg="StartContainer for \"229f6c985b740708ab382018135c081a5ecff5792e8d8d137bc3e5778c4fada6\" returns successfully" Jan 22 01:03:36.722000 audit[3081]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:36.722000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5060db50 a2=0 a3=7ffe5060db3c items=0 ppid=3028 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 01:03:36.732000 audit[3080]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.732000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8e32fe60 a2=0 a3=7ffc8e32fe4c items=0 ppid=3028 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 01:03:36.743000 audit[3087]: 
NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.743000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff456c3ed0 a2=0 a3=7fff456c3ebc items=0 ppid=3028 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 01:03:36.772000 audit[3088]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.772000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe66bd7560 a2=0 a3=7ffe66bd754c items=0 ppid=3028 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 01:03:36.782000 audit[3086]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:36.782000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff49091460 a2=0 a3=7fff4909144c items=0 ppid=3028 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 
01:03:36.804000 audit[3089]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:36.804000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe80740660 a2=0 a3=7ffe8074064c items=0 ppid=3028 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 01:03:36.835000 audit[3090]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.835000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff8416aac0 a2=0 a3=7fff8416aaac items=0 ppid=3028 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 01:03:36.876000 audit[3092]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.876000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea8ae84f0 a2=0 a3=7ffea8ae84dc items=0 ppid=3028 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.876000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 22 01:03:36.880892 kubelet[2860]: E0122 01:03:36.876997 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:36.898317 kubelet[2860]: E0122 01:03:36.897988 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:36.898317 kubelet[2860]: E0122 01:03:36.898218 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:36.960000 audit[3095]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.960000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd8f94a50 a2=0 a3=7fffd8f94a3c items=0 ppid=3028 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 22 01:03:36.975692 kubelet[2860]: I0122 01:03:36.974641 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xtxs7" 
podStartSLOduration=2.974626056 podStartE2EDuration="2.974626056s" podCreationTimestamp="2026-01-22 01:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:03:36.92929018 +0000 UTC m=+6.586483359" watchObservedRunningTime="2026-01-22 01:03:36.974626056 +0000 UTC m=+6.631819236" Jan 22 01:03:36.999000 audit[3096]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:36.999000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4f318d50 a2=0 a3=7fff4f318d3c items=0 ppid=3028 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:36.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 01:03:37.019000 audit[3098]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.019000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdefe40cf0 a2=0 a3=7ffdefe40cdc items=0 ppid=3028 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 01:03:37.022000 audit[3099]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3099 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.022000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb5b6d880 a2=0 a3=7ffdb5b6d86c items=0 ppid=3028 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 01:03:37.041000 audit[3101]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.041000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0fc0d6b0 a2=0 a3=7ffc0fc0d69c items=0 ppid=3028 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 01:03:37.086000 audit[3104]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.086000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff053cc710 a2=0 a3=7fff053cc6fc items=0 ppid=3028 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.086000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 22 01:03:37.097000 audit[3105]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.097000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe71d16f00 a2=0 a3=7ffe71d16eec items=0 ppid=3028 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 01:03:37.110000 audit[3107]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.110000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbe182d90 a2=0 a3=7fffbe182d7c items=0 ppid=3028 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 01:03:37.115000 audit[3108]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.115000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffdee2213e0 a2=0 a3=7ffdee2213cc items=0 ppid=3028 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 01:03:37.125000 audit[3110]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.125000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeea8bd7a0 a2=0 a3=7ffeea8bd78c items=0 ppid=3028 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 01:03:37.140000 audit[3113]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.140000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd763b2d80 a2=0 a3=7ffd763b2d6c items=0 ppid=3028 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.140000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 01:03:37.169000 audit[3116]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.169000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebb19dc50 a2=0 a3=7ffebb19dc3c items=0 ppid=3028 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 01:03:37.173000 audit[3117]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.173000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5687b720 a2=0 a3=7fff5687b70c items=0 ppid=3028 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 01:03:37.186000 audit[3119]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.186000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffda32cf790 a2=0 a3=7ffda32cf77c items=0 ppid=3028 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 01:03:37.211000 audit[3126]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.211000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc5e1fba0 a2=0 a3=7ffdc5e1fb8c items=0 ppid=3028 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 01:03:37.216000 audit[3127]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.216000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9daac860 a2=0 a3=7fff9daac84c items=0 ppid=3028 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 
01:03:37.227000 audit[3129]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 01:03:37.227000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe3e819bf0 a2=0 a3=7ffe3e819bdc items=0 ppid=3028 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 01:03:37.316224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371943235.mount: Deactivated successfully. Jan 22 01:03:37.325000 audit[3135]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:37.325000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2dcac750 a2=0 a3=7ffe2dcac73c items=0 ppid=3028 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:37.344000 audit[3135]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:37.344000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe2dcac750 a2=0 a3=7ffe2dcac73c items=0 ppid=3028 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:37.359000 audit[3140]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.359000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe06c75170 a2=0 a3=7ffe06c7515c items=0 ppid=3028 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 01:03:37.381000 audit[3142]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.381000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeba13bb20 a2=0 a3=7ffeba13bb0c items=0 ppid=3028 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 22 01:03:37.399000 audit[3145]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 
01:03:37.399000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffe1d249e0 a2=0 a3=7fffe1d249cc items=0 ppid=3028 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 22 01:03:37.403000 audit[3146]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.403000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe993ce170 a2=0 a3=7ffe993ce15c items=0 ppid=3028 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 01:03:37.414000 audit[3148]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.414000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb0ebbfb0 a2=0 a3=7ffdb0ebbf9c items=0 ppid=3028 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.414000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 01:03:37.427000 audit[3149]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.427000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1a04b370 a2=0 a3=7ffe1a04b35c items=0 ppid=3028 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 01:03:37.442000 audit[3151]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.442000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff8fcf5010 a2=0 a3=7fff8fcf4ffc items=0 ppid=3028 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.442000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 22 01:03:37.470000 audit[3154]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.470000 audit[3154]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffe2a23f130 a2=0 a3=7ffe2a23f11c items=0 ppid=3028 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 01:03:37.485000 audit[3155]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.485000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc762d5330 a2=0 a3=7ffc762d531c items=0 ppid=3028 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 01:03:37.500000 audit[3157]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.500000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9778ec50 a2=0 a3=7ffd9778ec3c items=0 ppid=3028 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.500000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 01:03:37.506000 audit[3158]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.506000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4636b280 a2=0 a3=7ffc4636b26c items=0 ppid=3028 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 01:03:37.521000 audit[3160]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.521000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff969aba00 a2=0 a3=7fff969ab9ec items=0 ppid=3028 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 01:03:37.560000 audit[3163]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.560000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffd73062cf0 a2=0 a3=7ffd73062cdc items=0 ppid=3028 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 01:03:37.582000 audit[3166]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.582000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff53605580 a2=0 a3=7fff5360556c items=0 ppid=3028 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 22 01:03:37.588000 audit[3167]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.588000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebca1d060 a2=0 a3=7ffebca1d04c items=0 ppid=3028 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.588000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 01:03:37.605000 audit[3169]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.605000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffee46a05e0 a2=0 a3=7ffee46a05cc items=0 ppid=3028 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 01:03:37.627000 audit[3172]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.627000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff168c6400 a2=0 a3=7fff168c63ec items=0 ppid=3028 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 01:03:37.633000 audit[3173]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.633000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5f5dee20 a2=0 a3=7ffc5f5dee0c items=0 ppid=3028 
pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 01:03:37.650000 audit[3175]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.650000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc067915b0 a2=0 a3=7ffc0679159c items=0 ppid=3028 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 01:03:37.658000 audit[3176]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.658000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc71496990 a2=0 a3=7ffc7149697c items=0 ppid=3028 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 01:03:37.670000 audit[3178]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 22 01:03:37.670000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe1daa7680 a2=0 a3=7ffe1daa766c items=0 ppid=3028 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 01:03:37.685000 audit[3181]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 01:03:37.685000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffecc025cc0 a2=0 a3=7ffecc025cac items=0 ppid=3028 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 01:03:37.698000 audit[3183]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 01:03:37.698000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe7d7f6880 a2=0 a3=7ffe7d7f686c items=0 ppid=3028 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.698000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:37.698000 audit[3183]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 01:03:37.698000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe7d7f6880 a2=0 a3=7ffe7d7f686c items=0 ppid=3028 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:37.698000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:37.892047 kubelet[2860]: E0122 01:03:37.891821 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:42.870234 containerd[1637]: time="2026-01-22T01:03:42.870022111Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:42.874128 containerd[1637]: time="2026-01-22T01:03:42.874067637Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 22 01:03:42.875834 containerd[1637]: time="2026-01-22T01:03:42.875740402Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:42.883828 containerd[1637]: time="2026-01-22T01:03:42.883078560Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:42.883828 containerd[1637]: time="2026-01-22T01:03:42.883729523Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.886391034s" Jan 22 01:03:42.883828 containerd[1637]: time="2026-01-22T01:03:42.883755491Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 22 01:03:42.894836 containerd[1637]: time="2026-01-22T01:03:42.894681544Z" level=info msg="CreateContainer within sandbox \"cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 22 01:03:42.916776 containerd[1637]: time="2026-01-22T01:03:42.916516220Z" level=info msg="Container b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:42.931894 containerd[1637]: time="2026-01-22T01:03:42.931720293Z" level=info msg="CreateContainer within sandbox \"cbe4fd037e8192a809dd1536ae9f409ef95d1ed7fca9440694db72e574a0f699\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c\"" Jan 22 01:03:42.933701 containerd[1637]: time="2026-01-22T01:03:42.933010106Z" level=info msg="StartContainer for \"b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c\"" Jan 22 01:03:42.934815 containerd[1637]: time="2026-01-22T01:03:42.934787266Z" level=info msg="connecting to shim b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c" address="unix:///run/containerd/s/fc5d95524c32fa1088f89f9184fa7fe43eb8355f19d985103e60242a66f69f9e" protocol=ttrpc version=3 Jan 22 01:03:42.974077 systemd[1]: Started cri-containerd-b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c.scope - libcontainer container 
b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c. Jan 22 01:03:43.006000 audit: BPF prog-id=146 op=LOAD Jan 22 01:03:43.015531 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 22 01:03:43.015640 kernel: audit: type=1334 audit(1769043823.006:499): prog-id=146 op=LOAD Jan 22 01:03:43.009000 audit: BPF prog-id=147 op=LOAD Jan 22 01:03:43.023844 kernel: audit: type=1334 audit(1769043823.009:500): prog-id=147 op=LOAD Jan 22 01:03:43.009000 audit[3188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.059463 kernel: audit: type=1300 audit(1769043823.009:500): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.059605 kernel: audit: type=1327 audit(1769043823.009:500): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.059656 kernel: audit: type=1334 audit(1769043823.009:501): prog-id=147 op=UNLOAD Jan 22 01:03:43.009000 audit: BPF prog-id=147 op=UNLOAD Jan 22 01:03:43.009000 audit[3188]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.089861 kernel: audit: type=1300 audit(1769043823.009:501): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.090704 kernel: audit: type=1327 audit(1769043823.009:501): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.009000 audit: BPF prog-id=148 op=LOAD Jan 22 01:03:43.118027 kernel: audit: type=1334 audit(1769043823.009:502): prog-id=148 op=LOAD Jan 22 01:03:43.118126 kernel: audit: type=1300 audit(1769043823.009:502): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.009000 audit[3188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.139596 containerd[1637]: time="2026-01-22T01:03:43.139110477Z" level=info msg="StartContainer for \"b52653517636a90aed639f85cace022b2595c28679267696569bdd77dc94543c\" returns successfully" Jan 22 01:03:43.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.010000 audit: BPF prog-id=149 op=LOAD Jan 22 01:03:43.170645 kernel: audit: type=1327 audit(1769043823.009:502): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.010000 audit[3188]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.010000 audit: BPF prog-id=149 op=UNLOAD Jan 22 01:03:43.010000 audit[3188]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.010000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.010000 audit: BPF prog-id=148 op=UNLOAD Jan 22 01:03:43.010000 audit[3188]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:43.011000 audit: BPF prog-id=150 op=LOAD Jan 22 01:03:43.011000 audit[3188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2972 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:43.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323635333531373633366139306165643633396638356361636530 Jan 22 01:03:50.532156 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 22 01:03:50.530000 audit[1833]: USER_END pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 01:03:50.541503 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 22 01:03:50.541623 kernel: audit: type=1106 audit(1769043830.530:507): pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 01:03:50.553716 sshd[1832]: Connection closed by 10.0.0.1 port 35916 Jan 22 01:03:50.558624 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Jan 22 01:03:50.531000 audit[1833]: CRED_DISP pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 01:03:50.579991 kernel: audit: type=1104 audit(1769043830.531:508): pid=1833 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 01:03:50.572000 audit[1829]: USER_END pid=1829 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:03:50.583495 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:35916.service: Deactivated successfully. Jan 22 01:03:50.591487 systemd[1]: session-7.scope: Deactivated successfully. Jan 22 01:03:50.592110 systemd[1]: session-7.scope: Consumed 9.037s CPU time, 218.3M memory peak. Jan 22 01:03:50.597655 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Jan 22 01:03:50.600063 systemd-logind[1609]: Removed session 7. 
Jan 22 01:03:50.605558 kernel: audit: type=1106 audit(1769043830.572:509): pid=1829 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:03:50.572000 audit[1829]: CRED_DISP pid=1829 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:03:50.625023 kernel: audit: type=1104 audit(1769043830.572:510): pid=1829 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:03:50.625154 kernel: audit: type=1131 audit(1769043830.582:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:03:50.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:03:51.377983 kernel: audit: type=1325 audit(1769043831.360:512): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.360000 audit[3280]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.360000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8dd14c0 a2=0 a3=7ffdc8dd14ac items=0 ppid=3028 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.421777 kernel: audit: type=1300 audit(1769043831.360:512): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8dd14c0 a2=0 a3=7ffdc8dd14ac items=0 ppid=3028 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.421912 kernel: audit: type=1327 audit(1769043831.360:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:51.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:51.429000 audit[3280]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.446717 kernel: audit: type=1325 audit(1769043831.429:513): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.429000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8dd14c0 a2=0 a3=0 items=0 ppid=3028 
pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.475620 kernel: audit: type=1300 audit(1769043831.429:513): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8dd14c0 a2=0 a3=0 items=0 ppid=3028 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:51.525000 audit[3283]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.525000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffef6221fb0 a2=0 a3=7ffef6221f9c items=0 ppid=3028 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:51.534000 audit[3283]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:51.534000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef6221fb0 a2=0 a3=0 items=0 ppid=3028 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:51.534000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:54.027000 audit[3286]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:54.027000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff80846030 a2=0 a3=7fff8084601c items=0 ppid=3028 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:54.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:54.034000 audit[3286]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:54.034000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff80846030 a2=0 a3=0 items=0 ppid=3028 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:54.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:54.099000 audit[3288]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:54.099000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc771d0140 a2=0 a3=7ffc771d012c items=0 ppid=3028 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:54.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:54.105000 audit[3288]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:54.105000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc771d0140 a2=0 a3=0 items=0 ppid=3028 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:54.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:55.159000 audit[3290]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:55.159000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc70cead20 a2=0 a3=7ffc70cead0c items=0 ppid=3028 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:55.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:55.168000 audit[3290]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:55.168000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc70cead20 a2=0 a3=0 items=0 ppid=3028 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:55.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:56.234570 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 22 01:03:56.234730 kernel: audit: type=1325 audit(1769043836.220:522): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:56.220000 audit[3292]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:56.220000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff5d760550 a2=0 a3=7fff5d76053c items=0 ppid=3028 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.266520 kernel: audit: type=1300 audit(1769043836.220:522): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff5d760550 a2=0 a3=7fff5d76053c items=0 ppid=3028 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.266752 kernel: audit: type=1327 audit(1769043836.220:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:56.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:56.277000 audit[3292]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 
01:03:56.277000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5d760550 a2=0 a3=0 items=0 ppid=3028 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.315711 kubelet[2860]: I0122 01:03:56.299680 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-26cww" podStartSLOduration=14.401623012 podStartE2EDuration="21.299663605s" podCreationTimestamp="2026-01-22 01:03:35 +0000 UTC" firstStartedPulling="2026-01-22 01:03:35.988793461 +0000 UTC m=+5.645986641" lastFinishedPulling="2026-01-22 01:03:42.886834056 +0000 UTC m=+12.544027234" observedRunningTime="2026-01-22 01:03:43.947349233 +0000 UTC m=+13.604542442" watchObservedRunningTime="2026-01-22 01:03:56.299663605 +0000 UTC m=+25.956856804" Jan 22 01:03:56.318492 kernel: audit: type=1325 audit(1769043836.277:523): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:56.318558 kernel: audit: type=1300 audit(1769043836.277:523): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5d760550 a2=0 a3=0 items=0 ppid=3028 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.318589 kernel: audit: type=1327 audit(1769043836.277:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:56.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:56.367160 systemd[1]: Created slice 
kubepods-besteffort-pod30fd5741_2697_4d2b_9d5d_925e720efe0c.slice - libcontainer container kubepods-besteffort-pod30fd5741_2697_4d2b_9d5d_925e720efe0c.slice. Jan 22 01:03:56.402072 kubelet[2860]: I0122 01:03:56.401821 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30fd5741-2697-4d2b-9d5d-925e720efe0c-tigera-ca-bundle\") pod \"calico-typha-7c68b5d78b-wcbk8\" (UID: \"30fd5741-2697-4d2b-9d5d-925e720efe0c\") " pod="calico-system/calico-typha-7c68b5d78b-wcbk8" Jan 22 01:03:56.402072 kubelet[2860]: I0122 01:03:56.401913 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/30fd5741-2697-4d2b-9d5d-925e720efe0c-typha-certs\") pod \"calico-typha-7c68b5d78b-wcbk8\" (UID: \"30fd5741-2697-4d2b-9d5d-925e720efe0c\") " pod="calico-system/calico-typha-7c68b5d78b-wcbk8" Jan 22 01:03:56.402072 kubelet[2860]: I0122 01:03:56.401933 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2c7d\" (UniqueName: \"kubernetes.io/projected/30fd5741-2697-4d2b-9d5d-925e720efe0c-kube-api-access-z2c7d\") pod \"calico-typha-7c68b5d78b-wcbk8\" (UID: \"30fd5741-2697-4d2b-9d5d-925e720efe0c\") " pod="calico-system/calico-typha-7c68b5d78b-wcbk8" Jan 22 01:03:56.487063 systemd[1]: Created slice kubepods-besteffort-pod5474ea79_2c7a_4af6_a56b_82ee693f9b4e.slice - libcontainer container kubepods-besteffort-pod5474ea79_2c7a_4af6_a56b_82ee693f9b4e.slice. 
Jan 22 01:03:56.503740 kubelet[2860]: I0122 01:03:56.503328 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-flexvol-driver-host\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.503740 kubelet[2860]: I0122 01:03:56.503513 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-policysync\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.503740 kubelet[2860]: I0122 01:03:56.503533 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-tigera-ca-bundle\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.503740 kubelet[2860]: I0122 01:03:56.503549 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-lib-modules\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.503740 kubelet[2860]: I0122 01:03:56.503565 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-xtables-lock\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.504277 kubelet[2860]: I0122 01:03:56.503579 2860 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5xss\" (UniqueName: \"kubernetes.io/projected/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-kube-api-access-w5xss\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.504277 kubelet[2860]: I0122 01:03:56.503643 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-cni-bin-dir\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.504277 kubelet[2860]: I0122 01:03:56.503684 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-cni-log-dir\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.504277 kubelet[2860]: I0122 01:03:56.503715 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-cni-net-dir\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.504277 kubelet[2860]: I0122 01:03:56.503739 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-var-run-calico\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.507793 kubelet[2860]: I0122 01:03:56.503779 2860 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-node-certs\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.507793 kubelet[2860]: I0122 01:03:56.503812 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5474ea79-2c7a-4af6-a56b-82ee693f9b4e-var-lib-calico\") pod \"calico-node-qqnfk\" (UID: \"5474ea79-2c7a-4af6-a56b-82ee693f9b4e\") " pod="calico-system/calico-node-qqnfk" Jan 22 01:03:56.620774 kubelet[2860]: E0122 01:03:56.620626 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.620774 kubelet[2860]: W0122 01:03:56.620696 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.620774 kubelet[2860]: E0122 01:03:56.620724 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.627476 kubelet[2860]: E0122 01:03:56.627328 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.627476 kubelet[2860]: W0122 01:03:56.627354 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.627583 kubelet[2860]: E0122 01:03:56.627503 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.643016 kubelet[2860]: E0122 01:03:56.642952 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:03:56.672803 kubelet[2860]: E0122 01:03:56.672340 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.672803 kubelet[2860]: W0122 01:03:56.672760 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.672803 kubelet[2860]: E0122 01:03:56.672793 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.676102 kubelet[2860]: E0122 01:03:56.674978 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:56.677993 containerd[1637]: time="2026-01-22T01:03:56.677844036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c68b5d78b-wcbk8,Uid:30fd5741-2697-4d2b-9d5d-925e720efe0c,Namespace:calico-system,Attempt:0,}" Jan 22 01:03:56.679628 kubelet[2860]: E0122 01:03:56.679358 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.679628 kubelet[2860]: W0122 01:03:56.679494 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.679628 kubelet[2860]: E0122 01:03:56.679518 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.681228 kubelet[2860]: E0122 01:03:56.681053 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.681228 kubelet[2860]: W0122 01:03:56.681068 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.681228 kubelet[2860]: E0122 01:03:56.681082 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.683360 kubelet[2860]: E0122 01:03:56.682966 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.683360 kubelet[2860]: W0122 01:03:56.682984 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.683360 kubelet[2860]: E0122 01:03:56.682999 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.684113 kubelet[2860]: E0122 01:03:56.683879 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.684113 kubelet[2860]: W0122 01:03:56.683892 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.684113 kubelet[2860]: E0122 01:03:56.683904 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.686696 kubelet[2860]: E0122 01:03:56.686479 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.686696 kubelet[2860]: W0122 01:03:56.686500 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.686696 kubelet[2860]: E0122 01:03:56.686515 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.692734 kubelet[2860]: E0122 01:03:56.692584 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.692935 kubelet[2860]: W0122 01:03:56.692836 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.692935 kubelet[2860]: E0122 01:03:56.692860 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.694038 kubelet[2860]: E0122 01:03:56.693941 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.694038 kubelet[2860]: W0122 01:03:56.694013 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.694038 kubelet[2860]: E0122 01:03:56.694032 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.696593 kubelet[2860]: E0122 01:03:56.696511 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.696593 kubelet[2860]: W0122 01:03:56.696526 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.696593 kubelet[2860]: E0122 01:03:56.696540 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.697575 kubelet[2860]: E0122 01:03:56.697138 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.697575 kubelet[2860]: W0122 01:03:56.697155 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.697575 kubelet[2860]: E0122 01:03:56.697168 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.697711 kubelet[2860]: E0122 01:03:56.697640 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.697711 kubelet[2860]: W0122 01:03:56.697652 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.697711 kubelet[2860]: E0122 01:03:56.697664 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.698297 kubelet[2860]: E0122 01:03:56.698018 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.698297 kubelet[2860]: W0122 01:03:56.698232 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.698297 kubelet[2860]: E0122 01:03:56.698246 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.698860 kubelet[2860]: E0122 01:03:56.698838 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.698860 kubelet[2860]: W0122 01:03:56.698858 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.698958 kubelet[2860]: E0122 01:03:56.698872 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.699864 kubelet[2860]: E0122 01:03:56.699825 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.699864 kubelet[2860]: W0122 01:03:56.699839 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.699864 kubelet[2860]: E0122 01:03:56.699853 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.700536 kubelet[2860]: E0122 01:03:56.700349 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.700536 kubelet[2860]: W0122 01:03:56.700464 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.700536 kubelet[2860]: E0122 01:03:56.700479 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.701125 kubelet[2860]: E0122 01:03:56.701017 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.701125 kubelet[2860]: W0122 01:03:56.701070 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.701125 kubelet[2860]: E0122 01:03:56.701083 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.702145 kubelet[2860]: E0122 01:03:56.701689 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.702145 kubelet[2860]: W0122 01:03:56.701702 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.702145 kubelet[2860]: E0122 01:03:56.701713 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.702948 kubelet[2860]: E0122 01:03:56.702701 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.702948 kubelet[2860]: W0122 01:03:56.702713 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.702948 kubelet[2860]: E0122 01:03:56.702723 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.703300 kubelet[2860]: E0122 01:03:56.703276 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.703300 kubelet[2860]: W0122 01:03:56.703290 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.703524 kubelet[2860]: E0122 01:03:56.703303 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.704292 kubelet[2860]: E0122 01:03:56.703931 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.704292 kubelet[2860]: W0122 01:03:56.703993 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.704292 kubelet[2860]: E0122 01:03:56.704009 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.709119 kubelet[2860]: E0122 01:03:56.708619 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.709119 kubelet[2860]: W0122 01:03:56.708635 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.709119 kubelet[2860]: E0122 01:03:56.708650 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.711756 kubelet[2860]: E0122 01:03:56.711684 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.711756 kubelet[2860]: W0122 01:03:56.711705 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.711756 kubelet[2860]: E0122 01:03:56.711726 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.712756 kubelet[2860]: I0122 01:03:56.711764 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3eaef106-75d7-42c8-a82e-57ae58f4f9cb-varrun\") pod \"csi-node-driver-8grjm\" (UID: \"3eaef106-75d7-42c8-a82e-57ae58f4f9cb\") " pod="calico-system/csi-node-driver-8grjm" Jan 22 01:03:56.715510 kubelet[2860]: E0122 01:03:56.714540 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.715510 kubelet[2860]: W0122 01:03:56.714562 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.715510 kubelet[2860]: E0122 01:03:56.714578 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.715510 kubelet[2860]: I0122 01:03:56.714654 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3eaef106-75d7-42c8-a82e-57ae58f4f9cb-registration-dir\") pod \"csi-node-driver-8grjm\" (UID: \"3eaef106-75d7-42c8-a82e-57ae58f4f9cb\") " pod="calico-system/csi-node-driver-8grjm" Jan 22 01:03:56.718898 kubelet[2860]: E0122 01:03:56.718675 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.718898 kubelet[2860]: W0122 01:03:56.718744 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.718898 kubelet[2860]: E0122 01:03:56.718765 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.719440 kubelet[2860]: I0122 01:03:56.719084 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3eaef106-75d7-42c8-a82e-57ae58f4f9cb-kubelet-dir\") pod \"csi-node-driver-8grjm\" (UID: \"3eaef106-75d7-42c8-a82e-57ae58f4f9cb\") " pod="calico-system/csi-node-driver-8grjm" Jan 22 01:03:56.723500 kubelet[2860]: E0122 01:03:56.723065 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.723500 kubelet[2860]: W0122 01:03:56.723136 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.723500 kubelet[2860]: E0122 01:03:56.723157 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.728725 kubelet[2860]: E0122 01:03:56.728681 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.728725 kubelet[2860]: W0122 01:03:56.728699 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.728725 kubelet[2860]: E0122 01:03:56.728717 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.732699 kubelet[2860]: E0122 01:03:56.732546 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.732779 kubelet[2860]: W0122 01:03:56.732747 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.732779 kubelet[2860]: E0122 01:03:56.732769 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.734474 kubelet[2860]: I0122 01:03:56.733160 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3eaef106-75d7-42c8-a82e-57ae58f4f9cb-socket-dir\") pod \"csi-node-driver-8grjm\" (UID: \"3eaef106-75d7-42c8-a82e-57ae58f4f9cb\") " pod="calico-system/csi-node-driver-8grjm" Jan 22 01:03:56.734474 kubelet[2860]: E0122 01:03:56.733691 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.734474 kubelet[2860]: W0122 01:03:56.733704 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.734474 kubelet[2860]: E0122 01:03:56.733719 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.734859 kubelet[2860]: E0122 01:03:56.734553 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.734859 kubelet[2860]: W0122 01:03:56.734569 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.734859 kubelet[2860]: E0122 01:03:56.734793 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.739618 kubelet[2860]: E0122 01:03:56.735968 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.739618 kubelet[2860]: W0122 01:03:56.735987 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.739618 kubelet[2860]: E0122 01:03:56.736005 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.739780 kubelet[2860]: I0122 01:03:56.739634 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmhr\" (UniqueName: \"kubernetes.io/projected/3eaef106-75d7-42c8-a82e-57ae58f4f9cb-kube-api-access-lxmhr\") pod \"csi-node-driver-8grjm\" (UID: \"3eaef106-75d7-42c8-a82e-57ae58f4f9cb\") " pod="calico-system/csi-node-driver-8grjm" Jan 22 01:03:56.739822 kubelet[2860]: E0122 01:03:56.739786 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.739822 kubelet[2860]: W0122 01:03:56.739799 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.739822 kubelet[2860]: E0122 01:03:56.739815 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.742583 kubelet[2860]: E0122 01:03:56.742141 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.742583 kubelet[2860]: W0122 01:03:56.742158 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.742583 kubelet[2860]: E0122 01:03:56.742240 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.744572 kubelet[2860]: E0122 01:03:56.744338 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.744572 kubelet[2860]: W0122 01:03:56.744350 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.744572 kubelet[2860]: E0122 01:03:56.744361 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.745549 kubelet[2860]: E0122 01:03:56.745334 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.745549 kubelet[2860]: W0122 01:03:56.745468 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.745549 kubelet[2860]: E0122 01:03:56.745479 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.750762 kubelet[2860]: E0122 01:03:56.747074 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.750762 kubelet[2860]: W0122 01:03:56.747345 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.750762 kubelet[2860]: E0122 01:03:56.747522 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.754122 kubelet[2860]: E0122 01:03:56.753907 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.754122 kubelet[2860]: W0122 01:03:56.753974 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.754122 kubelet[2860]: E0122 01:03:56.753994 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.769353 containerd[1637]: time="2026-01-22T01:03:56.768763002Z" level=info msg="connecting to shim b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c" address="unix:///run/containerd/s/b7d7f04fc017ec8fdc563d770be3b65375890d2330e030a203a592e73b26e3e6" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:56.798679 kubelet[2860]: E0122 01:03:56.798633 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:56.800788 containerd[1637]: time="2026-01-22T01:03:56.800730376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qqnfk,Uid:5474ea79-2c7a-4af6-a56b-82ee693f9b4e,Namespace:calico-system,Attempt:0,}" Jan 22 01:03:56.848125 kubelet[2860]: E0122 01:03:56.847776 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.848125 kubelet[2860]: W0122 01:03:56.847802 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.848125 kubelet[2860]: E0122 01:03:56.847893 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.850067 kubelet[2860]: E0122 01:03:56.849871 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.850067 kubelet[2860]: W0122 01:03:56.850024 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.850067 kubelet[2860]: E0122 01:03:56.850046 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.851880 kubelet[2860]: E0122 01:03:56.851801 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.851880 kubelet[2860]: W0122 01:03:56.851873 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.851942 kubelet[2860]: E0122 01:03:56.851892 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.852689 kubelet[2860]: E0122 01:03:56.852642 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.852689 kubelet[2860]: W0122 01:03:56.852659 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.852689 kubelet[2860]: E0122 01:03:56.852675 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.853891 kubelet[2860]: E0122 01:03:56.853654 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.853891 kubelet[2860]: W0122 01:03:56.853721 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.853891 kubelet[2860]: E0122 01:03:56.853738 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.858504 kubelet[2860]: E0122 01:03:56.858267 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.858504 kubelet[2860]: W0122 01:03:56.858284 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.872864 kubelet[2860]: E0122 01:03:56.872552 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.875751 kubelet[2860]: E0122 01:03:56.875723 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.877492 kubelet[2860]: W0122 01:03:56.875952 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.877492 kubelet[2860]: E0122 01:03:56.876034 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.877492 kubelet[2860]: E0122 01:03:56.876640 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.877492 kubelet[2860]: W0122 01:03:56.876652 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.877492 kubelet[2860]: E0122 01:03:56.876663 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.877806 kubelet[2860]: E0122 01:03:56.877680 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.877806 kubelet[2860]: W0122 01:03:56.877694 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.877806 kubelet[2860]: E0122 01:03:56.877706 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.879258 kubelet[2860]: E0122 01:03:56.878620 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.879258 kubelet[2860]: W0122 01:03:56.878632 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.879258 kubelet[2860]: E0122 01:03:56.878642 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.880056 kubelet[2860]: E0122 01:03:56.880033 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.880258 kubelet[2860]: W0122 01:03:56.880129 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.880321 kubelet[2860]: E0122 01:03:56.880266 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.881154 kubelet[2860]: E0122 01:03:56.881135 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.881306 systemd[1]: Started cri-containerd-b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c.scope - libcontainer container b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c. 
Jan 22 01:03:56.882287 kubelet[2860]: W0122 01:03:56.881305 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.882287 kubelet[2860]: E0122 01:03:56.881321 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.883118 kubelet[2860]: E0122 01:03:56.882927 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.883118 kubelet[2860]: W0122 01:03:56.882947 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.883118 kubelet[2860]: E0122 01:03:56.882961 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.885360 kubelet[2860]: E0122 01:03:56.884686 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.885360 kubelet[2860]: W0122 01:03:56.884703 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.885360 kubelet[2860]: E0122 01:03:56.884717 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.888549 kubelet[2860]: E0122 01:03:56.888313 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.888549 kubelet[2860]: W0122 01:03:56.888327 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.888549 kubelet[2860]: E0122 01:03:56.888338 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.890013 kubelet[2860]: E0122 01:03:56.889678 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.890521 kubelet[2860]: W0122 01:03:56.890106 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.890521 kubelet[2860]: E0122 01:03:56.890129 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.893568 kubelet[2860]: E0122 01:03:56.893550 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.893739 kubelet[2860]: W0122 01:03:56.893654 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.893739 kubelet[2860]: E0122 01:03:56.893676 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.895532 kubelet[2860]: E0122 01:03:56.895073 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.895532 kubelet[2860]: W0122 01:03:56.895090 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.895532 kubelet[2860]: E0122 01:03:56.895102 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.897828 kubelet[2860]: E0122 01:03:56.896734 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.897828 kubelet[2860]: W0122 01:03:56.896751 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.897828 kubelet[2860]: E0122 01:03:56.896767 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.899317 kubelet[2860]: E0122 01:03:56.897941 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.899317 kubelet[2860]: W0122 01:03:56.897956 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.899317 kubelet[2860]: E0122 01:03:56.897971 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.899895 containerd[1637]: time="2026-01-22T01:03:56.899847020Z" level=info msg="connecting to shim fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e" address="unix:///run/containerd/s/506a759036fed7bd32b4ec8aa6f1c22554eb3082877e3f2103422521552fa53a" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:03:56.900573 kubelet[2860]: E0122 01:03:56.900514 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.900573 kubelet[2860]: W0122 01:03:56.900535 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.900573 kubelet[2860]: E0122 01:03:56.900551 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.901865 kubelet[2860]: E0122 01:03:56.901843 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.901958 kubelet[2860]: W0122 01:03:56.901940 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.902041 kubelet[2860]: E0122 01:03:56.902026 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.906087 kubelet[2860]: E0122 01:03:56.906039 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.906087 kubelet[2860]: W0122 01:03:56.906055 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.906087 kubelet[2860]: E0122 01:03:56.906069 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.908949 kubelet[2860]: E0122 01:03:56.908840 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.908949 kubelet[2860]: W0122 01:03:56.908858 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.908949 kubelet[2860]: E0122 01:03:56.908874 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.911919 kubelet[2860]: E0122 01:03:56.911858 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.911919 kubelet[2860]: W0122 01:03:56.911877 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.911919 kubelet[2860]: E0122 01:03:56.911891 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:03:56.934533 kubelet[2860]: E0122 01:03:56.934359 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:03:56.934533 kubelet[2860]: W0122 01:03:56.934471 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:03:56.934533 kubelet[2860]: E0122 01:03:56.934490 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:03:56.941000 audit: BPF prog-id=151 op=LOAD Jan 22 01:03:56.949830 kernel: audit: type=1334 audit(1769043836.941:524): prog-id=151 op=LOAD Jan 22 01:03:56.949927 kernel: audit: type=1334 audit(1769043836.942:525): prog-id=152 op=LOAD Jan 22 01:03:56.942000 audit: BPF prog-id=152 op=LOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.982331 kernel: audit: type=1300 audit(1769043836.942:525): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=152 op=UNLOAD Jan 22 01:03:57.009574 kernel: audit: type=1327 audit(1769043836.942:525): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=153 op=LOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=154 op=LOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=154 op=UNLOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=153 op=UNLOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:56.942000 audit: BPF prog-id=155 op=LOAD Jan 22 01:03:56.942000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3355 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:56.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233366337613863343662333735663235633263663531643663386339 Jan 22 01:03:57.019737 systemd[1]: Started cri-containerd-fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e.scope - libcontainer container fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e. 
Jan 22 01:03:57.077000 audit: BPF prog-id=156 op=LOAD Jan 22 01:03:57.079637 containerd[1637]: time="2026-01-22T01:03:57.079530326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c68b5d78b-wcbk8,Uid:30fd5741-2697-4d2b-9d5d-925e720efe0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c\"" Jan 22 01:03:57.078000 audit: BPF prog-id=157 op=LOAD Jan 22 01:03:57.078000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.079000 audit: BPF prog-id=157 op=UNLOAD Jan 22 01:03:57.079000 audit[3431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.079000 audit: BPF prog-id=158 op=LOAD Jan 22 01:03:57.079000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.079000 audit: BPF prog-id=159 op=LOAD Jan 22 01:03:57.079000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.079000 audit: BPF prog-id=159 op=UNLOAD Jan 22 01:03:57.079000 audit[3431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.079000 audit: BPF prog-id=158 op=UNLOAD Jan 22 01:03:57.079000 audit[3431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.080000 audit: BPF prog-id=160 op=LOAD Jan 22 01:03:57.080000 audit[3431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3404 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663336261303765303261303636373666663937313531326435643635 Jan 22 01:03:57.089294 kubelet[2860]: E0122 01:03:57.088655 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:57.100625 containerd[1637]: time="2026-01-22T01:03:57.100583074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 22 01:03:57.158060 containerd[1637]: time="2026-01-22T01:03:57.157817515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qqnfk,Uid:5474ea79-2c7a-4af6-a56b-82ee693f9b4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\"" Jan 22 01:03:57.159243 kubelet[2860]: E0122 01:03:57.159144 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:03:57.356000 audit[3465]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:57.356000 audit[3465]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff89d46850 a2=0 a3=7fff89d4683c items=0 ppid=3028 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:57.371000 audit[3465]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:03:57.371000 audit[3465]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff89d46850 a2=0 a3=0 items=0 ppid=3028 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:57.371000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:03:57.725205 kubelet[2860]: E0122 01:03:57.725026 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:03:57.894619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342497324.mount: Deactivated successfully. 
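Annotation: the dns.go "Nameserver limits exceeded" entries fire because the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet keeps only the first three when building pod DNS config; the "applied nameserver line" in the log shows the survivors. A rough sketch of that truncation, assuming a hypothetical four-entry host list:

```go
package main

import "fmt"

// maxNameservers mirrors the three-entry resolver limit the kubelet
// warns about in the dns.go log entries above.
const maxNameservers = 3

func main() {
	// Hypothetical host resolv.conf entries; the first three match the
	// "applied nameserver line" reported in the log.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	fmt.Println(servers) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```

The warning is therefore cosmetic unless the omitted server was the only one able to resolve a required zone.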
Jan 22 01:03:59.055767 containerd[1637]: time="2026-01-22T01:03:59.053089327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.056581 containerd[1637]: time="2026-01-22T01:03:59.056561708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33738263" Jan 22 01:03:59.098860 containerd[1637]: time="2026-01-22T01:03:59.098723777Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.103831 containerd[1637]: time="2026-01-22T01:03:59.103773850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.107346 containerd[1637]: time="2026-01-22T01:03:59.104091191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.002950791s" Jan 22 01:03:59.107346 containerd[1637]: time="2026-01-22T01:03:59.104119523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 22 01:03:59.107346 containerd[1637]: time="2026-01-22T01:03:59.105766596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 22 01:03:59.134199 containerd[1637]: time="2026-01-22T01:03:59.134051993Z" level=info msg="CreateContainer within sandbox \"b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 22 01:03:59.145027 containerd[1637]: time="2026-01-22T01:03:59.144729978Z" level=info msg="Container 29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:03:59.160027 containerd[1637]: time="2026-01-22T01:03:59.159976852Z" level=info msg="CreateContainer within sandbox \"b36c7a8c46b375f25c2cf51d6c8c932ec8fd80354d924f9c905a8e239e7aed6c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f\"" Jan 22 01:03:59.161082 containerd[1637]: time="2026-01-22T01:03:59.160967766Z" level=info msg="StartContainer for \"29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f\"" Jan 22 01:03:59.162867 containerd[1637]: time="2026-01-22T01:03:59.162821210Z" level=info msg="connecting to shim 29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f" address="unix:///run/containerd/s/b7d7f04fc017ec8fdc563d770be3b65375890d2330e030a203a592e73b26e3e6" protocol=ttrpc version=3 Jan 22 01:03:59.202896 systemd[1]: Started cri-containerd-29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f.scope - libcontainer container 29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f. 
Jan 22 01:03:59.237000 audit: BPF prog-id=161 op=LOAD Jan 22 01:03:59.238000 audit: BPF prog-id=162 op=LOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=162 op=UNLOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=163 op=LOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=164 op=LOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=164 op=UNLOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=163 op=UNLOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.238000 audit: BPF prog-id=165 op=LOAD Jan 22 01:03:59.238000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3355 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:03:59.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333338663362633135363966363839636363303037323031643963 Jan 22 01:03:59.321688 containerd[1637]: time="2026-01-22T01:03:59.319741594Z" level=info msg="StartContainer for \"29338f3bc1569f689ccc007201d9c9d4d42941da5fb05f8dc0f80425d4e0225f\" returns successfully" Jan 22 01:03:59.720861 kubelet[2860]: E0122 01:03:59.720101 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:03:59.960660 containerd[1637]: time="2026-01-22T01:03:59.960565315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.961973 containerd[1637]: time="2026-01-22T01:03:59.961733889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes 
read=4442579" Jan 22 01:03:59.963545 containerd[1637]: time="2026-01-22T01:03:59.963492896Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.968186 containerd[1637]: time="2026-01-22T01:03:59.967955859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:03:59.968982 containerd[1637]: time="2026-01-22T01:03:59.968794322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 863.002428ms" Jan 22 01:03:59.968982 containerd[1637]: time="2026-01-22T01:03:59.968901097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 22 01:03:59.977456 containerd[1637]: time="2026-01-22T01:03:59.977309654Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 22 01:03:59.992822 containerd[1637]: time="2026-01-22T01:03:59.992734545Z" level=info msg="Container 0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:04:00.006175 containerd[1637]: time="2026-01-22T01:04:00.005929789Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e\"" Jan 22 01:04:00.007254 containerd[1637]: time="2026-01-22T01:04:00.007221847Z" level=info msg="StartContainer for \"0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e\"" Jan 22 01:04:00.010521 containerd[1637]: time="2026-01-22T01:04:00.010330815Z" level=info msg="connecting to shim 0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e" address="unix:///run/containerd/s/506a759036fed7bd32b4ec8aa6f1c22554eb3082877e3f2103422521552fa53a" protocol=ttrpc version=3 Jan 22 01:04:00.020563 kubelet[2860]: E0122 01:04:00.020346 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:00.035717 kubelet[2860]: E0122 01:04:00.035559 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.035717 kubelet[2860]: W0122 01:04:00.035623 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.035717 kubelet[2860]: E0122 01:04:00.035643 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.036691 kubelet[2860]: E0122 01:04:00.036621 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.036691 kubelet[2860]: W0122 01:04:00.036672 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.036691 kubelet[2860]: E0122 01:04:00.036684 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.037217 kubelet[2860]: E0122 01:04:00.037056 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.037217 kubelet[2860]: W0122 01:04:00.037107 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.037217 kubelet[2860]: E0122 01:04:00.037167 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.038594 kubelet[2860]: E0122 01:04:00.038265 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.038594 kubelet[2860]: W0122 01:04:00.038321 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.038594 kubelet[2860]: E0122 01:04:00.038332 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.040814 kubelet[2860]: E0122 01:04:00.040537 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.040814 kubelet[2860]: W0122 01:04:00.040603 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.040814 kubelet[2860]: E0122 01:04:00.040621 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.043271 kubelet[2860]: E0122 01:04:00.041484 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.043271 kubelet[2860]: W0122 01:04:00.041498 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.043271 kubelet[2860]: E0122 01:04:00.041511 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.045347 kubelet[2860]: E0122 01:04:00.045075 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.045347 kubelet[2860]: W0122 01:04:00.045204 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.045347 kubelet[2860]: E0122 01:04:00.045223 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.047289 kubelet[2860]: E0122 01:04:00.046227 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.047289 kubelet[2860]: W0122 01:04:00.046244 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.047289 kubelet[2860]: E0122 01:04:00.046256 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.048003 kubelet[2860]: E0122 01:04:00.047889 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.048692 kubelet[2860]: W0122 01:04:00.048065 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.049039 kubelet[2860]: E0122 01:04:00.048354 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.050680 kubelet[2860]: E0122 01:04:00.050601 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.051454 kubelet[2860]: W0122 01:04:00.050829 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.051454 kubelet[2860]: E0122 01:04:00.051058 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.056191 kubelet[2860]: I0122 01:04:00.055977 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c68b5d78b-wcbk8" podStartSLOduration=2.041856062 podStartE2EDuration="4.055965708s" podCreationTimestamp="2026-01-22 01:03:56 +0000 UTC" firstStartedPulling="2026-01-22 01:03:57.091357409 +0000 UTC m=+26.748550588" lastFinishedPulling="2026-01-22 01:03:59.105467045 +0000 UTC m=+28.762660234" observedRunningTime="2026-01-22 01:04:00.055488408 +0000 UTC m=+29.712681607" watchObservedRunningTime="2026-01-22 01:04:00.055965708 +0000 UTC m=+29.713158887" Jan 22 01:04:00.057708 kubelet[2860]: E0122 01:04:00.057485 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.057708 kubelet[2860]: W0122 01:04:00.057506 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.057708 kubelet[2860]: E0122 01:04:00.057520 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.058647 kubelet[2860]: E0122 01:04:00.058581 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.058703 kubelet[2860]: W0122 01:04:00.058649 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.058703 kubelet[2860]: E0122 01:04:00.058664 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.059200 kubelet[2860]: E0122 01:04:00.059100 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.059200 kubelet[2860]: W0122 01:04:00.059195 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.059289 kubelet[2860]: E0122 01:04:00.059213 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.059890 kubelet[2860]: E0122 01:04:00.059798 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.059890 kubelet[2860]: W0122 01:04:00.059870 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.059890 kubelet[2860]: E0122 01:04:00.059888 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.062361 kubelet[2860]: E0122 01:04:00.062289 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.062740 kubelet[2860]: W0122 01:04:00.062554 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.062777 kubelet[2860]: E0122 01:04:00.062740 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.074803 systemd[1]: Started cri-containerd-0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e.scope - libcontainer container 0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e. 
Jan 22 01:04:00.114547 kubelet[2860]: E0122 01:04:00.114293 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.114652 kubelet[2860]: W0122 01:04:00.114567 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.114652 kubelet[2860]: E0122 01:04:00.114598 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.115329 kubelet[2860]: E0122 01:04:00.115238 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.115329 kubelet[2860]: W0122 01:04:00.115312 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.115610 kubelet[2860]: E0122 01:04:00.115332 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.116198 kubelet[2860]: E0122 01:04:00.115977 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.116198 kubelet[2860]: W0122 01:04:00.115991 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.116198 kubelet[2860]: E0122 01:04:00.116002 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.116533 kubelet[2860]: E0122 01:04:00.116518 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.116609 kubelet[2860]: W0122 01:04:00.116597 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.116657 kubelet[2860]: E0122 01:04:00.116646 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.117625 kubelet[2860]: E0122 01:04:00.117610 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.117695 kubelet[2860]: W0122 01:04:00.117683 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.117744 kubelet[2860]: E0122 01:04:00.117734 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.118699 kubelet[2860]: E0122 01:04:00.118684 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.118757 kubelet[2860]: W0122 01:04:00.118746 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.118803 kubelet[2860]: E0122 01:04:00.118792 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.119767 kubelet[2860]: E0122 01:04:00.119730 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.119767 kubelet[2860]: W0122 01:04:00.119743 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.119767 kubelet[2860]: E0122 01:04:00.119752 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.120589 kubelet[2860]: E0122 01:04:00.120553 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.120589 kubelet[2860]: W0122 01:04:00.120565 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.120589 kubelet[2860]: E0122 01:04:00.120575 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.121678 kubelet[2860]: E0122 01:04:00.121641 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.121678 kubelet[2860]: W0122 01:04:00.121655 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.121678 kubelet[2860]: E0122 01:04:00.121664 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.122491 kubelet[2860]: E0122 01:04:00.122338 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.122491 kubelet[2860]: W0122 01:04:00.122356 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.122609 kubelet[2860]: E0122 01:04:00.122589 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.123630 kubelet[2860]: E0122 01:04:00.123581 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.123630 kubelet[2860]: W0122 01:04:00.123599 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.123630 kubelet[2860]: E0122 01:04:00.123611 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.125052 kubelet[2860]: E0122 01:04:00.125037 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.125204 kubelet[2860]: W0122 01:04:00.125100 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.125204 kubelet[2860]: E0122 01:04:00.125186 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.126222 kubelet[2860]: E0122 01:04:00.126202 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.126312 kubelet[2860]: W0122 01:04:00.126284 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.126312 kubelet[2860]: E0122 01:04:00.126299 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.130083 kubelet[2860]: E0122 01:04:00.130040 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.130083 kubelet[2860]: W0122 01:04:00.130054 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.130083 kubelet[2860]: E0122 01:04:00.130064 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.131595 kubelet[2860]: E0122 01:04:00.131546 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.131595 kubelet[2860]: W0122 01:04:00.131564 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.131595 kubelet[2860]: E0122 01:04:00.131577 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.132606 kubelet[2860]: E0122 01:04:00.132542 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.132606 kubelet[2860]: W0122 01:04:00.132570 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.132606 kubelet[2860]: E0122 01:04:00.132587 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.136056 kubelet[2860]: E0122 01:04:00.135564 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.136056 kubelet[2860]: W0122 01:04:00.135578 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.136056 kubelet[2860]: E0122 01:04:00.135589 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 01:04:00.136627 kubelet[2860]: E0122 01:04:00.136307 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 01:04:00.136627 kubelet[2860]: W0122 01:04:00.136464 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 01:04:00.136627 kubelet[2860]: E0122 01:04:00.136483 2860 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 01:04:00.182000 audit: BPF prog-id=166 op=LOAD Jan 22 01:04:00.182000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3404 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:00.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643566383335386536366636336362393766333639366363643832 Jan 22 01:04:00.182000 audit: BPF prog-id=167 op=LOAD Jan 22 01:04:00.182000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3404 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:00.182000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643566383335386536366636336362393766333639366363643832 Jan 22 01:04:00.182000 audit: BPF prog-id=167 op=UNLOAD Jan 22 01:04:00.182000 audit[3520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:00.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643566383335386536366636336362393766333639366363643832 Jan 22 01:04:00.182000 audit: BPF prog-id=166 op=UNLOAD Jan 22 01:04:00.182000 audit[3520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:00.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643566383335386536366636336362393766333639366363643832 Jan 22 01:04:00.182000 audit: BPF prog-id=168 op=LOAD Jan 22 01:04:00.182000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3404 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:04:00.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643566383335386536366636336362393766333639366363643832 Jan 22 01:04:00.220349 containerd[1637]: time="2026-01-22T01:04:00.220270759Z" level=info msg="StartContainer for \"0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e\" returns successfully" Jan 22 01:04:00.243331 systemd[1]: cri-containerd-0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e.scope: Deactivated successfully. Jan 22 01:04:00.248000 audit: BPF prog-id=168 op=UNLOAD Jan 22 01:04:00.252270 containerd[1637]: time="2026-01-22T01:04:00.252191693Z" level=info msg="received container exit event container_id:\"0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e\" id:\"0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e\" pid:3551 exited_at:{seconds:1769043840 nanos:250960161}" Jan 22 01:04:00.303355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd5f8358e66f63cb97f3696ccd823daede36cca3f14f25d0522665599f0e49e-rootfs.mount: Deactivated successfully. 
Jan 22 01:04:01.041029 kubelet[2860]: I0122 01:04:01.040918 2860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 01:04:01.041853 kubelet[2860]: E0122 01:04:01.041295 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:01.041853 kubelet[2860]: E0122 01:04:01.041778 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:01.044305 containerd[1637]: time="2026-01-22T01:04:01.044198467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 22 01:04:01.720022 kubelet[2860]: E0122 01:04:01.719253 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:03.379007 kubelet[2860]: I0122 01:04:03.378946 2860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 01:04:03.381532 kubelet[2860]: E0122 01:04:03.380977 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:03.463513 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 22 01:04:03.463671 kernel: audit: type=1325 audit(1769043843.451:556): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:03.451000 audit[3613]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:03.451000 
audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea8ea8c00 a2=0 a3=7ffea8ea8bec items=0 ppid=3028 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:03.501602 kernel: audit: type=1300 audit(1769043843.451:556): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea8ea8c00 a2=0 a3=7ffea8ea8bec items=0 ppid=3028 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:03.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:03.517918 kernel: audit: type=1327 audit(1769043843.451:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:03.518052 kernel: audit: type=1325 audit(1769043843.516:557): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:03.516000 audit[3613]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:03.516000 audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffea8ea8c00 a2=0 a3=7ffea8ea8bec items=0 ppid=3028 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:03.553501 kernel: audit: type=1300 audit(1769043843.516:557): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffea8ea8c00 a2=0 a3=7ffea8ea8bec items=0 ppid=3028 pid=3613 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:03.566712 kernel: audit: type=1327 audit(1769043843.516:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:03.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:03.720487 kubelet[2860]: E0122 01:04:03.720290 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:04.050727 kubelet[2860]: E0122 01:04:04.050594 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:04.087304 containerd[1637]: time="2026-01-22T01:04:04.086961592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:04.088855 containerd[1637]: time="2026-01-22T01:04:04.088758741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 22 01:04:04.090944 containerd[1637]: time="2026-01-22T01:04:04.090851930Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:04.094271 containerd[1637]: time="2026-01-22T01:04:04.094187823Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:04.095260 containerd[1637]: time="2026-01-22T01:04:04.094969999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.050723534s" Jan 22 01:04:04.095260 containerd[1637]: time="2026-01-22T01:04:04.095104337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 22 01:04:04.108129 containerd[1637]: time="2026-01-22T01:04:04.107725806Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 22 01:04:04.125501 containerd[1637]: time="2026-01-22T01:04:04.125305572Z" level=info msg="Container efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:04:04.140343 containerd[1637]: time="2026-01-22T01:04:04.140243742Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7\"" Jan 22 01:04:04.141783 containerd[1637]: time="2026-01-22T01:04:04.141710996Z" level=info msg="StartContainer for \"efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7\"" Jan 22 01:04:04.144519 containerd[1637]: time="2026-01-22T01:04:04.144285886Z" level=info msg="connecting to shim 
efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7" address="unix:///run/containerd/s/506a759036fed7bd32b4ec8aa6f1c22554eb3082877e3f2103422521552fa53a" protocol=ttrpc version=3 Jan 22 01:04:04.186857 systemd[1]: Started cri-containerd-efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7.scope - libcontainer container efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7. Jan 22 01:04:04.275000 audit: BPF prog-id=169 op=LOAD Jan 22 01:04:04.275000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.298491 kernel: audit: type=1334 audit(1769043844.275:558): prog-id=169 op=LOAD Jan 22 01:04:04.298560 kernel: audit: type=1300 audit(1769043844.275:558): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.298596 kernel: audit: type=1327 audit(1769043844.275:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.276000 audit: BPF prog-id=170 op=LOAD Jan 22 01:04:04.318501 kernel: audit: type=1334 
audit(1769043844.276:559): prog-id=170 op=LOAD Jan 22 01:04:04.276000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.276000 audit: BPF prog-id=170 op=UNLOAD Jan 22 01:04:04.276000 audit[3618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.276000 audit: BPF prog-id=169 op=UNLOAD Jan 22 01:04:04.276000 audit[3618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.276000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.276000 audit: BPF prog-id=171 op=LOAD Jan 22 01:04:04.276000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3404 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:04.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653561653939386365643965666433353337363033653839613935 Jan 22 01:04:04.348807 containerd[1637]: time="2026-01-22T01:04:04.348700789Z" level=info msg="StartContainer for \"efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7\" returns successfully" Jan 22 01:04:05.059431 kubelet[2860]: E0122 01:04:05.059230 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:05.285274 systemd[1]: cri-containerd-efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7.scope: Deactivated successfully. Jan 22 01:04:05.285863 systemd[1]: cri-containerd-efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7.scope: Consumed 968ms CPU time, 176.5M memory peak, 3.7M read from disk, 171.3M written to disk. 
Jan 22 01:04:05.292000 audit: BPF prog-id=171 op=UNLOAD Jan 22 01:04:05.312545 containerd[1637]: time="2026-01-22T01:04:05.312117888Z" level=info msg="received container exit event container_id:\"efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7\" id:\"efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7\" pid:3632 exited_at:{seconds:1769043845 nanos:289788524}" Jan 22 01:04:05.365811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efe5ae998ced9efd3537603e89a957f1578d43fefb378a919b6e3702e60672b7-rootfs.mount: Deactivated successfully. Jan 22 01:04:05.381140 kubelet[2860]: I0122 01:04:05.380991 2860 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 22 01:04:05.478230 systemd[1]: Created slice kubepods-besteffort-pod3572850e_6c2e_4ed9_a568_75ca88ead4a5.slice - libcontainer container kubepods-besteffort-pod3572850e_6c2e_4ed9_a568_75ca88ead4a5.slice. Jan 22 01:04:05.502717 systemd[1]: Created slice kubepods-besteffort-pod410a0576_5e2f_4491_9946_abeea17a07fc.slice - libcontainer container kubepods-besteffort-pod410a0576_5e2f_4491_9946_abeea17a07fc.slice. Jan 22 01:04:05.532225 systemd[1]: Created slice kubepods-burstable-pod5fa7f8c6_6405_4444_b48f_a91387b277e9.slice - libcontainer container kubepods-burstable-pod5fa7f8c6_6405_4444_b48f_a91387b277e9.slice. Jan 22 01:04:05.542489 systemd[1]: Created slice kubepods-besteffort-pod08120695_b0cc_4d57_901b_351d05e2677f.slice - libcontainer container kubepods-besteffort-pod08120695_b0cc_4d57_901b_351d05e2677f.slice. Jan 22 01:04:05.573675 systemd[1]: Created slice kubepods-besteffort-podf8548d33_1edf_4a43_bdaf_8508761dc0af.slice - libcontainer container kubepods-besteffort-podf8548d33_1edf_4a43_bdaf_8508761dc0af.slice. 
Jan 22 01:04:05.576650 kubelet[2860]: I0122 01:04:05.576338 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3572850e-6c2e-4ed9-a568-75ca88ead4a5-calico-apiserver-certs\") pod \"calico-apiserver-7968ffc4b-nnclr\" (UID: \"3572850e-6c2e-4ed9-a568-75ca88ead4a5\") " pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" Jan 22 01:04:05.580132 kubelet[2860]: I0122 01:04:05.578336 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03592d59-bdcf-436c-be98-9c688e9b6f7e-goldmane-ca-bundle\") pod \"goldmane-666569f655-gwr5j\" (UID: \"03592d59-bdcf-436c-be98-9c688e9b6f7e\") " pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:05.580554 kubelet[2860]: I0122 01:04:05.580478 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/410a0576-5e2f-4491-9946-abeea17a07fc-tigera-ca-bundle\") pod \"calico-kube-controllers-d659955cd-4lv6v\" (UID: \"410a0576-5e2f-4491-9946-abeea17a07fc\") " pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" Jan 22 01:04:05.580652 kubelet[2860]: I0122 01:04:05.580621 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-backend-key-pair\") pod \"whisker-754b879bb5-9l9ch\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " pod="calico-system/whisker-754b879bb5-9l9ch" Jan 22 01:04:05.580652 kubelet[2860]: I0122 01:04:05.580644 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2d4\" (UniqueName: \"kubernetes.io/projected/3572850e-6c2e-4ed9-a568-75ca88ead4a5-kube-api-access-8g2d4\") pod 
\"calico-apiserver-7968ffc4b-nnclr\" (UID: \"3572850e-6c2e-4ed9-a568-75ca88ead4a5\") " pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" Jan 22 01:04:05.580731 kubelet[2860]: I0122 01:04:05.580699 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6k5\" (UniqueName: \"kubernetes.io/projected/03592d59-bdcf-436c-be98-9c688e9b6f7e-kube-api-access-fv6k5\") pod \"goldmane-666569f655-gwr5j\" (UID: \"03592d59-bdcf-436c-be98-9c688e9b6f7e\") " pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:05.580731 kubelet[2860]: I0122 01:04:05.580720 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvtg\" (UniqueName: \"kubernetes.io/projected/5fa7f8c6-6405-4444-b48f-a91387b277e9-kube-api-access-rnvtg\") pod \"coredns-674b8bbfcf-2nn4x\" (UID: \"5fa7f8c6-6405-4444-b48f-a91387b277e9\") " pod="kube-system/coredns-674b8bbfcf-2nn4x" Jan 22 01:04:05.580817 kubelet[2860]: I0122 01:04:05.580737 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45kx\" (UniqueName: \"kubernetes.io/projected/f8548d33-1edf-4a43-bdaf-8508761dc0af-kube-api-access-m45kx\") pod \"whisker-754b879bb5-9l9ch\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " pod="calico-system/whisker-754b879bb5-9l9ch" Jan 22 01:04:05.580817 kubelet[2860]: I0122 01:04:05.580754 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08120695-b0cc-4d57-901b-351d05e2677f-calico-apiserver-certs\") pod \"calico-apiserver-7968ffc4b-xxfmg\" (UID: \"08120695-b0cc-4d57-901b-351d05e2677f\") " pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" Jan 22 01:04:05.580817 kubelet[2860]: I0122 01:04:05.580771 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sw4gd\" (UniqueName: \"kubernetes.io/projected/08120695-b0cc-4d57-901b-351d05e2677f-kube-api-access-sw4gd\") pod \"calico-apiserver-7968ffc4b-xxfmg\" (UID: \"08120695-b0cc-4d57-901b-351d05e2677f\") " pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" Jan 22 01:04:05.580817 kubelet[2860]: I0122 01:04:05.580791 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03592d59-bdcf-436c-be98-9c688e9b6f7e-config\") pod \"goldmane-666569f655-gwr5j\" (UID: \"03592d59-bdcf-436c-be98-9c688e9b6f7e\") " pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:05.580817 kubelet[2860]: I0122 01:04:05.580806 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fa7f8c6-6405-4444-b48f-a91387b277e9-config-volume\") pod \"coredns-674b8bbfcf-2nn4x\" (UID: \"5fa7f8c6-6405-4444-b48f-a91387b277e9\") " pod="kube-system/coredns-674b8bbfcf-2nn4x" Jan 22 01:04:05.580992 kubelet[2860]: I0122 01:04:05.580819 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52c71295-c428-49b6-883f-725a15bb85e1-config-volume\") pod \"coredns-674b8bbfcf-rbgbh\" (UID: \"52c71295-c428-49b6-883f-725a15bb85e1\") " pod="kube-system/coredns-674b8bbfcf-rbgbh" Jan 22 01:04:05.580992 kubelet[2860]: I0122 01:04:05.580832 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6skf6\" (UniqueName: \"kubernetes.io/projected/52c71295-c428-49b6-883f-725a15bb85e1-kube-api-access-6skf6\") pod \"coredns-674b8bbfcf-rbgbh\" (UID: \"52c71295-c428-49b6-883f-725a15bb85e1\") " pod="kube-system/coredns-674b8bbfcf-rbgbh" Jan 22 01:04:05.580992 kubelet[2860]: I0122 01:04:05.580848 2860 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdb5\" (UniqueName: \"kubernetes.io/projected/410a0576-5e2f-4491-9946-abeea17a07fc-kube-api-access-4pdb5\") pod \"calico-kube-controllers-d659955cd-4lv6v\" (UID: \"410a0576-5e2f-4491-9946-abeea17a07fc\") " pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" Jan 22 01:04:05.580992 kubelet[2860]: I0122 01:04:05.580861 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/03592d59-bdcf-436c-be98-9c688e9b6f7e-goldmane-key-pair\") pod \"goldmane-666569f655-gwr5j\" (UID: \"03592d59-bdcf-436c-be98-9c688e9b6f7e\") " pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:05.580992 kubelet[2860]: I0122 01:04:05.580875 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-ca-bundle\") pod \"whisker-754b879bb5-9l9ch\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " pod="calico-system/whisker-754b879bb5-9l9ch" Jan 22 01:04:05.595216 systemd[1]: Created slice kubepods-burstable-pod52c71295_c428_49b6_883f_725a15bb85e1.slice - libcontainer container kubepods-burstable-pod52c71295_c428_49b6_883f_725a15bb85e1.slice. Jan 22 01:04:05.610900 systemd[1]: Created slice kubepods-besteffort-pod03592d59_bdcf_436c_be98_9c688e9b6f7e.slice - libcontainer container kubepods-besteffort-pod03592d59_bdcf_436c_be98_9c688e9b6f7e.slice. Jan 22 01:04:05.759823 systemd[1]: Created slice kubepods-besteffort-pod3eaef106_75d7_42c8_a82e_57ae58f4f9cb.slice - libcontainer container kubepods-besteffort-pod3eaef106_75d7_42c8_a82e_57ae58f4f9cb.slice. 
Jan 22 01:04:05.764467 containerd[1637]: time="2026-01-22T01:04:05.764258239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8grjm,Uid:3eaef106-75d7-42c8-a82e-57ae58f4f9cb,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:05.791765 containerd[1637]: time="2026-01-22T01:04:05.791714304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-nnclr,Uid:3572850e-6c2e-4ed9-a568-75ca88ead4a5,Namespace:calico-apiserver,Attempt:0,}" Jan 22 01:04:05.815335 containerd[1637]: time="2026-01-22T01:04:05.815244976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d659955cd-4lv6v,Uid:410a0576-5e2f-4491-9946-abeea17a07fc,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:05.853699 containerd[1637]: time="2026-01-22T01:04:05.853252242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-xxfmg,Uid:08120695-b0cc-4d57-901b-351d05e2677f,Namespace:calico-apiserver,Attempt:0,}" Jan 22 01:04:05.857470 kubelet[2860]: E0122 01:04:05.857310 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:05.861939 containerd[1637]: time="2026-01-22T01:04:05.861697575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nn4x,Uid:5fa7f8c6-6405-4444-b48f-a91387b277e9,Namespace:kube-system,Attempt:0,}" Jan 22 01:04:05.884603 containerd[1637]: time="2026-01-22T01:04:05.884561114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754b879bb5-9l9ch,Uid:f8548d33-1edf-4a43-bdaf-8508761dc0af,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:05.907912 kubelet[2860]: E0122 01:04:05.907657 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:05.908922 containerd[1637]: 
time="2026-01-22T01:04:05.908878293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rbgbh,Uid:52c71295-c428-49b6-883f-725a15bb85e1,Namespace:kube-system,Attempt:0,}" Jan 22 01:04:05.928309 containerd[1637]: time="2026-01-22T01:04:05.928124956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gwr5j,Uid:03592d59-bdcf-436c-be98-9c688e9b6f7e,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:06.091490 kubelet[2860]: E0122 01:04:06.090937 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:06.095473 containerd[1637]: time="2026-01-22T01:04:06.095160254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 22 01:04:06.141793 containerd[1637]: time="2026-01-22T01:04:06.141021826Z" level=error msg="Failed to destroy network for sandbox \"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.145726 containerd[1637]: time="2026-01-22T01:04:06.145603751Z" level=error msg="Failed to destroy network for sandbox \"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.156313 containerd[1637]: time="2026-01-22T01:04:06.155672797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8grjm,Uid:3eaef106-75d7-42c8-a82e-57ae58f4f9cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.156767 kubelet[2860]: E0122 01:04:06.156263 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.156767 kubelet[2860]: E0122 01:04:06.156341 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8grjm" Jan 22 01:04:06.156767 kubelet[2860]: E0122 01:04:06.156460 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8grjm" Jan 22 01:04:06.156931 kubelet[2860]: E0122 01:04:06.156551 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb6852a68ad0ef64eafce56711fd9c87fc4c5305fb93b4ca801b451efbe4c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:06.159731 containerd[1637]: time="2026-01-22T01:04:06.159679013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754b879bb5-9l9ch,Uid:f8548d33-1edf-4a43-bdaf-8508761dc0af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.162506 kubelet[2860]: E0122 01:04:06.162139 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.162506 kubelet[2860]: E0122 01:04:06.162213 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-754b879bb5-9l9ch" Jan 22 01:04:06.162506 kubelet[2860]: E0122 01:04:06.162246 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754b879bb5-9l9ch" Jan 22 01:04:06.162679 kubelet[2860]: E0122 01:04:06.162314 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754b879bb5-9l9ch_calico-system(f8548d33-1edf-4a43-bdaf-8508761dc0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754b879bb5-9l9ch_calico-system(f8548d33-1edf-4a43-bdaf-8508761dc0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"737b01f88725a6dccadae90938a4394d0e10e402c202eb563ff07d25cea1bf24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754b879bb5-9l9ch" podUID="f8548d33-1edf-4a43-bdaf-8508761dc0af" Jan 22 01:04:06.168309 containerd[1637]: time="2026-01-22T01:04:06.168196564Z" level=error msg="Failed to destroy network for sandbox \"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.179756 containerd[1637]: time="2026-01-22T01:04:06.179524823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-nnclr,Uid:3572850e-6c2e-4ed9-a568-75ca88ead4a5,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.180461 kubelet[2860]: E0122 01:04:06.180212 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.180461 kubelet[2860]: E0122 01:04:06.180303 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" Jan 22 01:04:06.180461 kubelet[2860]: E0122 01:04:06.180336 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" Jan 22 01:04:06.180584 kubelet[2860]: E0122 01:04:06.180506 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc81089392f3c87a13b8c75acb5a6f449d0c121d66cb9b7c53c5bce6f6de355\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:06.187947 containerd[1637]: time="2026-01-22T01:04:06.187815996Z" level=error msg="Failed to destroy network for sandbox \"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.190689 containerd[1637]: time="2026-01-22T01:04:06.190533161Z" level=error msg="Failed to destroy network for sandbox \"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.194344 containerd[1637]: time="2026-01-22T01:04:06.193985865Z" level=error msg="Failed to destroy network for sandbox \"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.198466 containerd[1637]: time="2026-01-22T01:04:06.197842312Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-rbgbh,Uid:52c71295-c428-49b6-883f-725a15bb85e1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.198978 kubelet[2860]: E0122 01:04:06.198597 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.198978 kubelet[2860]: E0122 01:04:06.198645 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rbgbh" Jan 22 01:04:06.198978 kubelet[2860]: E0122 01:04:06.198718 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rbgbh" Jan 22 01:04:06.199202 kubelet[2860]: E0122 01:04:06.198764 2860 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rbgbh_kube-system(52c71295-c428-49b6-883f-725a15bb85e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rbgbh_kube-system(52c71295-c428-49b6-883f-725a15bb85e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef35435aca43a2fc85e02db502a6149a70f7fa01a809cadb5a9150f355693788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rbgbh" podUID="52c71295-c428-49b6-883f-725a15bb85e1" Jan 22 01:04:06.199845 containerd[1637]: time="2026-01-22T01:04:06.199617298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d659955cd-4lv6v,Uid:410a0576-5e2f-4491-9946-abeea17a07fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.201912 kubelet[2860]: E0122 01:04:06.201315 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.203275 kubelet[2860]: E0122 01:04:06.202137 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" Jan 22 01:04:06.203275 kubelet[2860]: E0122 01:04:06.202177 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" Jan 22 01:04:06.203275 kubelet[2860]: E0122 01:04:06.203245 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3603487dde002e119c4105da41dcd768063266a8503122d46eeb910b788808ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:06.215815 containerd[1637]: time="2026-01-22T01:04:06.215711595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gwr5j,Uid:03592d59-bdcf-436c-be98-9c688e9b6f7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.216491 kubelet[2860]: E0122 01:04:06.216215 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.216491 kubelet[2860]: E0122 01:04:06.216305 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:06.216491 kubelet[2860]: E0122 01:04:06.216327 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gwr5j" Jan 22 01:04:06.219522 kubelet[2860]: E0122 01:04:06.219250 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"6e632c3731587d4f8a3de28410cd6e2e16ca9bf074a5f296b54d5a29d2e1418e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:06.231587 containerd[1637]: time="2026-01-22T01:04:06.231521537Z" level=error msg="Failed to destroy network for sandbox \"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.237304 containerd[1637]: time="2026-01-22T01:04:06.237223747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-xxfmg,Uid:08120695-b0cc-4d57-901b-351d05e2677f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.239557 kubelet[2860]: E0122 01:04:06.237668 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.239557 kubelet[2860]: E0122 01:04:06.237740 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" Jan 22 01:04:06.239557 kubelet[2860]: E0122 01:04:06.237767 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" Jan 22 01:04:06.239737 kubelet[2860]: E0122 01:04:06.237901 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7968ffc4b-xxfmg_calico-apiserver(08120695-b0cc-4d57-901b-351d05e2677f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7968ffc4b-xxfmg_calico-apiserver(08120695-b0cc-4d57-901b-351d05e2677f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1bdbefc3769fe132cd2399f4aae890e9745506f95e2c04696713b87668850fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:06.253324 containerd[1637]: time="2026-01-22T01:04:06.253191188Z" level=error msg="Failed to destroy network for sandbox \"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.311482 containerd[1637]: time="2026-01-22T01:04:06.311203418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nn4x,Uid:5fa7f8c6-6405-4444-b48f-a91387b277e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.312091 kubelet[2860]: E0122 01:04:06.311950 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 01:04:06.312091 kubelet[2860]: E0122 01:04:06.312073 2860 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2nn4x" Jan 22 01:04:06.312237 kubelet[2860]: E0122 01:04:06.312103 2860 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2nn4x" Jan 22 01:04:06.312237 kubelet[2860]: E0122 01:04:06.312166 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2nn4x_kube-system(5fa7f8c6-6405-4444-b48f-a91387b277e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2nn4x_kube-system(5fa7f8c6-6405-4444-b48f-a91387b277e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"389611f725b045c32774e42df779b3d28ef41f3e91dc912baa85deeb39e4eddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2nn4x" podUID="5fa7f8c6-6405-4444-b48f-a91387b277e9" Jan 22 01:04:12.266568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775804736.mount: Deactivated successfully. Jan 22 01:04:12.406525 containerd[1637]: time="2026-01-22T01:04:12.406304596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:12.408068 containerd[1637]: time="2026-01-22T01:04:12.408032641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 22 01:04:12.410561 containerd[1637]: time="2026-01-22T01:04:12.410346008Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:12.413758 containerd[1637]: time="2026-01-22T01:04:12.413647315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 01:04:12.414946 containerd[1637]: 
time="2026-01-22T01:04:12.414672156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.319470976s" Jan 22 01:04:12.414946 containerd[1637]: time="2026-01-22T01:04:12.414766491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 22 01:04:12.452212 containerd[1637]: time="2026-01-22T01:04:12.451668569Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 22 01:04:12.470338 containerd[1637]: time="2026-01-22T01:04:12.470163346Z" level=info msg="Container 756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:04:12.491524 containerd[1637]: time="2026-01-22T01:04:12.491236796Z" level=info msg="CreateContainer within sandbox \"fc3ba07e02a06676ff971512d5d65b05619912668c65e13a83ecc3ee2f71378e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f\"" Jan 22 01:04:12.492593 containerd[1637]: time="2026-01-22T01:04:12.492227162Z" level=info msg="StartContainer for \"756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f\"" Jan 22 01:04:12.498144 containerd[1637]: time="2026-01-22T01:04:12.498056922Z" level=info msg="connecting to shim 756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f" address="unix:///run/containerd/s/506a759036fed7bd32b4ec8aa6f1c22554eb3082877e3f2103422521552fa53a" protocol=ttrpc version=3 Jan 22 01:04:12.590873 systemd[1]: Started 
cri-containerd-756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f.scope - libcontainer container 756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f. Jan 22 01:04:12.692548 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 22 01:04:12.692756 kernel: audit: type=1334 audit(1769043852.687:564): prog-id=172 op=LOAD Jan 22 01:04:12.687000 audit: BPF prog-id=172 op=LOAD Jan 22 01:04:12.687000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.711521 kernel: audit: type=1300 audit(1769043852.687:564): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.711660 kernel: audit: type=1327 audit(1769043852.687:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.687000 audit: BPF prog-id=173 op=LOAD Jan 22 01:04:12.731348 kernel: audit: type=1334 audit(1769043852.687:565): prog-id=173 op=LOAD Jan 22 01:04:12.731576 kernel: audit: type=1300 audit(1769043852.687:565): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.687000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.767465 kernel: audit: type=1327 audit(1769043852.687:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.767566 kernel: audit: type=1334 audit(1769043852.687:566): prog-id=173 op=UNLOAD Jan 22 01:04:12.687000 audit: BPF prog-id=173 op=UNLOAD Jan 22 01:04:12.687000 audit[3946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.786510 containerd[1637]: time="2026-01-22T01:04:12.786292486Z" level=info msg="StartContainer for \"756d6db69e33cf1c95d4ee88f259df01ed849681ae8335c997d23b70e054f00f\" returns successfully" Jan 22 01:04:12.794155 kernel: audit: type=1300 audit(1769043852.687:566): arch=c000003e syscall=3 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.687000 audit: BPF prog-id=172 op=UNLOAD Jan 22 01:04:12.816032 kernel: audit: type=1327 audit(1769043852.687:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.816099 kernel: audit: type=1334 audit(1769043852.687:567): prog-id=172 op=UNLOAD Jan 22 01:04:12.687000 audit[3946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:12.687000 audit: BPF prog-id=174 op=LOAD Jan 22 01:04:12.687000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3404 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:12.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735366436646236396533336366316339356434656538386632353964 Jan 22 01:04:13.022695 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 22 01:04:13.022850 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 22 01:04:13.124149 kubelet[2860]: E0122 01:04:13.124078 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:13.170501 kubelet[2860]: I0122 01:04:13.169705 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qqnfk" podStartSLOduration=1.915201464 podStartE2EDuration="17.169687466s" podCreationTimestamp="2026-01-22 01:03:56 +0000 UTC" firstStartedPulling="2026-01-22 01:03:57.161348356 +0000 UTC m=+26.818541555" lastFinishedPulling="2026-01-22 01:04:12.415834377 +0000 UTC m=+42.073027557" observedRunningTime="2026-01-22 01:04:13.164077462 +0000 UTC m=+42.821270661" watchObservedRunningTime="2026-01-22 01:04:13.169687466 +0000 UTC m=+42.826880646" Jan 22 01:04:13.475430 kubelet[2860]: I0122 01:04:13.475172 2860 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-ca-bundle\") pod \"f8548d33-1edf-4a43-bdaf-8508761dc0af\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " Jan 22 01:04:13.475430 kubelet[2860]: I0122 01:04:13.475340 2860 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45kx\" (UniqueName: 
\"kubernetes.io/projected/f8548d33-1edf-4a43-bdaf-8508761dc0af-kube-api-access-m45kx\") pod \"f8548d33-1edf-4a43-bdaf-8508761dc0af\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " Jan 22 01:04:13.475610 kubelet[2860]: I0122 01:04:13.475525 2860 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-backend-key-pair\") pod \"f8548d33-1edf-4a43-bdaf-8508761dc0af\" (UID: \"f8548d33-1edf-4a43-bdaf-8508761dc0af\") " Jan 22 01:04:13.475741 kubelet[2860]: I0122 01:04:13.475650 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f8548d33-1edf-4a43-bdaf-8508761dc0af" (UID: "f8548d33-1edf-4a43-bdaf-8508761dc0af"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 01:04:13.485520 kubelet[2860]: I0122 01:04:13.485293 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8548d33-1edf-4a43-bdaf-8508761dc0af-kube-api-access-m45kx" (OuterVolumeSpecName: "kube-api-access-m45kx") pod "f8548d33-1edf-4a43-bdaf-8508761dc0af" (UID: "f8548d33-1edf-4a43-bdaf-8508761dc0af"). InnerVolumeSpecName "kube-api-access-m45kx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 01:04:13.488162 kubelet[2860]: I0122 01:04:13.488080 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f8548d33-1edf-4a43-bdaf-8508761dc0af" (UID: "f8548d33-1edf-4a43-bdaf-8508761dc0af"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 01:04:13.490915 systemd[1]: var-lib-kubelet-pods-f8548d33\x2d1edf\x2d4a43\x2dbdaf\x2d8508761dc0af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm45kx.mount: Deactivated successfully. Jan 22 01:04:13.491794 systemd[1]: var-lib-kubelet-pods-f8548d33\x2d1edf\x2d4a43\x2dbdaf\x2d8508761dc0af-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 22 01:04:13.577189 kubelet[2860]: I0122 01:04:13.577049 2860 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 22 01:04:13.577189 kubelet[2860]: I0122 01:04:13.577102 2860 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8548d33-1edf-4a43-bdaf-8508761dc0af-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 22 01:04:13.577189 kubelet[2860]: I0122 01:04:13.577116 2860 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m45kx\" (UniqueName: \"kubernetes.io/projected/f8548d33-1edf-4a43-bdaf-8508761dc0af-kube-api-access-m45kx\") on node \"localhost\" DevicePath \"\"" Jan 22 01:04:14.127770 kubelet[2860]: E0122 01:04:14.127223 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:14.140250 systemd[1]: Removed slice kubepods-besteffort-podf8548d33_1edf_4a43_bdaf_8508761dc0af.slice - libcontainer container kubepods-besteffort-podf8548d33_1edf_4a43_bdaf_8508761dc0af.slice. Jan 22 01:04:14.281306 systemd[1]: Created slice kubepods-besteffort-pod5adc9a7e_5d84_4421_b95f_9f8854c5ffaa.slice - libcontainer container kubepods-besteffort-pod5adc9a7e_5d84_4421_b95f_9f8854c5ffaa.slice. 
Jan 22 01:04:14.384597 kubelet[2860]: I0122 01:04:14.384223 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5adc9a7e-5d84-4421-b95f-9f8854c5ffaa-whisker-backend-key-pair\") pod \"whisker-589cfcc65-g4r68\" (UID: \"5adc9a7e-5d84-4421-b95f-9f8854c5ffaa\") " pod="calico-system/whisker-589cfcc65-g4r68" Jan 22 01:04:14.384597 kubelet[2860]: I0122 01:04:14.384324 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lms\" (UniqueName: \"kubernetes.io/projected/5adc9a7e-5d84-4421-b95f-9f8854c5ffaa-kube-api-access-t6lms\") pod \"whisker-589cfcc65-g4r68\" (UID: \"5adc9a7e-5d84-4421-b95f-9f8854c5ffaa\") " pod="calico-system/whisker-589cfcc65-g4r68" Jan 22 01:04:14.384597 kubelet[2860]: I0122 01:04:14.384480 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5adc9a7e-5d84-4421-b95f-9f8854c5ffaa-whisker-ca-bundle\") pod \"whisker-589cfcc65-g4r68\" (UID: \"5adc9a7e-5d84-4421-b95f-9f8854c5ffaa\") " pod="calico-system/whisker-589cfcc65-g4r68" Jan 22 01:04:14.592769 containerd[1637]: time="2026-01-22T01:04:14.592585266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589cfcc65-g4r68,Uid:5adc9a7e-5d84-4421-b95f-9f8854c5ffaa,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:14.728702 kubelet[2860]: I0122 01:04:14.728297 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8548d33-1edf-4a43-bdaf-8508761dc0af" path="/var/lib/kubelet/pods/f8548d33-1edf-4a43-bdaf-8508761dc0af/volumes" Jan 22 01:04:15.035309 systemd-networkd[1547]: cali0e4013de2ae: Link UP Jan 22 01:04:15.037919 systemd-networkd[1547]: cali0e4013de2ae: Gained carrier Jan 22 01:04:15.100020 containerd[1637]: 2026-01-22 01:04:14.653 [INFO][4067] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Jan 22 01:04:15.100020 containerd[1637]: 2026-01-22 01:04:14.690 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--589cfcc65--g4r68-eth0 whisker-589cfcc65- calico-system 5adc9a7e-5d84-4421-b95f-9f8854c5ffaa 929 0 2026-01-22 01:04:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:589cfcc65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-589cfcc65-g4r68 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e4013de2ae [] [] }} ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-" Jan 22 01:04:15.100020 containerd[1637]: 2026-01-22 01:04:14.690 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.100020 containerd[1637]: 2026-01-22 01:04:14.850 [INFO][4080] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" HandleID="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Workload="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.855 [INFO][4080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" HandleID="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Workload="localhost-k8s-whisker--589cfcc65--g4r68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00010f990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-589cfcc65-g4r68", "timestamp":"2026-01-22 01:04:14.850907679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.855 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.856 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.857 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.892 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" host="localhost" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.922 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.946 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.954 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.960 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:15.100493 containerd[1637]: 2026-01-22 01:04:14.960 [INFO][4080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" 
host="localhost" Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.964 [INFO][4080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46 Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.975 [INFO][4080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" host="localhost" Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.984 [INFO][4080] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" host="localhost" Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.984 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" host="localhost" Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.984 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 01:04:15.100802 containerd[1637]: 2026-01-22 01:04:14.984 [INFO][4080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" HandleID="k8s-pod-network.81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Workload="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.100994 containerd[1637]: 2026-01-22 01:04:14.994 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--589cfcc65--g4r68-eth0", GenerateName:"whisker-589cfcc65-", Namespace:"calico-system", SelfLink:"", UID:"5adc9a7e-5d84-4421-b95f-9f8854c5ffaa", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 4, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589cfcc65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-589cfcc65-g4r68", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e4013de2ae", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:15.100994 containerd[1637]: 2026-01-22 01:04:14.995 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.101128 containerd[1637]: 2026-01-22 01:04:14.996 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e4013de2ae ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.101128 containerd[1637]: 2026-01-22 01:04:15.044 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.104605 containerd[1637]: 2026-01-22 01:04:15.045 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--589cfcc65--g4r68-eth0", GenerateName:"whisker-589cfcc65-", Namespace:"calico-system", SelfLink:"", UID:"5adc9a7e-5d84-4421-b95f-9f8854c5ffaa", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 4, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589cfcc65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46", Pod:"whisker-589cfcc65-g4r68", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e4013de2ae", MAC:"8e:cb:49:f9:49:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:15.104902 containerd[1637]: 2026-01-22 01:04:15.082 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" Namespace="calico-system" Pod="whisker-589cfcc65-g4r68" WorkloadEndpoint="localhost-k8s-whisker--589cfcc65--g4r68-eth0" Jan 22 01:04:15.145996 kubelet[2860]: E0122 01:04:15.145893 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:15.390869 containerd[1637]: time="2026-01-22T01:04:15.390610348Z" level=info msg="connecting to shim 81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46" address="unix:///run/containerd/s/e1df798fe8dca411727217330318225cc85ea32ee0722f673d0281531d640456" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:15.484849 systemd[1]: Started 
cri-containerd-81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46.scope - libcontainer container 81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46. Jan 22 01:04:15.501000 audit: BPF prog-id=175 op=LOAD Jan 22 01:04:15.501000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd44a8ae0 a2=98 a3=1fffffffffffffff items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.501000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.501000 audit: BPF prog-id=175 op=UNLOAD Jan 22 01:04:15.501000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd44a8ab0 a3=0 items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.501000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.501000 audit: BPF prog-id=176 op=LOAD Jan 22 01:04:15.501000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd44a89c0 a2=94 a3=3 items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.501000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.504000 audit: BPF prog-id=176 op=UNLOAD Jan 22 01:04:15.504000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd44a89c0 a2=94 a3=3 items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.504000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.504000 audit: BPF prog-id=177 op=LOAD Jan 22 01:04:15.504000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd44a8a00 a2=94 a3=7ffcd44a8be0 items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.504000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.504000 audit: BPF prog-id=177 op=UNLOAD Jan 22 01:04:15.504000 audit[4294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd44a8a00 a2=94 a3=7ffcd44a8be0 items=0 ppid=4110 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.504000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 01:04:15.511000 audit: BPF prog-id=178 op=LOAD Jan 22 01:04:15.511000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff483193f0 a2=98 a3=3 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.511000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.511000 audit: BPF prog-id=178 op=UNLOAD Jan 22 01:04:15.511000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff483193c0 a3=0 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.511000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.512000 audit: BPF prog-id=179 op=LOAD Jan 22 01:04:15.512000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff483191e0 a2=94 a3=54428f items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.512000 audit: BPF prog-id=179 op=UNLOAD Jan 22 01:04:15.512000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff483191e0 a2=94 a3=54428f items=0 ppid=4110 
pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.512000 audit: BPF prog-id=180 op=LOAD Jan 22 01:04:15.512000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff48319210 a2=94 a3=2 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.512000 audit: BPF prog-id=180 op=UNLOAD Jan 22 01:04:15.512000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff48319210 a2=0 a3=2 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.532000 audit: BPF prog-id=181 op=LOAD Jan 22 01:04:15.533000 audit: BPF prog-id=182 op=LOAD Jan 22 01:04:15.533000 audit[4269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 
01:04:15.533000 audit: BPF prog-id=182 op=UNLOAD Jan 22 01:04:15.533000 audit[4269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.535000 audit: BPF prog-id=183 op=LOAD Jan 22 01:04:15.535000 audit[4269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.535000 audit: BPF prog-id=184 op=LOAD Jan 22 01:04:15.535000 audit[4269]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.535000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.535000 audit: BPF prog-id=184 op=UNLOAD Jan 22 01:04:15.535000 audit[4269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.535000 audit: BPF prog-id=183 op=UNLOAD Jan 22 01:04:15.535000 audit[4269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.535000 audit: BPF prog-id=185 op=LOAD Jan 22 01:04:15.535000 audit[4269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4254 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:04:15.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623136313939303066313163623730396135643531323838346532 Jan 22 01:04:15.540083 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:15.644724 containerd[1637]: time="2026-01-22T01:04:15.644291212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589cfcc65-g4r68,Uid:5adc9a7e-5d84-4421-b95f-9f8854c5ffaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"81b1619900f11cb709a5d512884e237641a6a5ec4178397eef70680968b55c46\"" Jan 22 01:04:15.659179 containerd[1637]: time="2026-01-22T01:04:15.658810160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 01:04:15.746171 containerd[1637]: time="2026-01-22T01:04:15.745792425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:15.750285 containerd[1637]: time="2026-01-22T01:04:15.749356589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:15.750285 containerd[1637]: time="2026-01-22T01:04:15.750210268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 01:04:15.751112 kubelet[2860]: E0122 01:04:15.751062 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:15.751516 kubelet[2860]: 
E0122 01:04:15.751248 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:15.751829 kubelet[2860]: E0122 01:04:15.751731 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be94bbf4992b4f04a6ddb3aecaced976,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:15.766334 containerd[1637]: time="2026-01-22T01:04:15.766190523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 01:04:15.830813 containerd[1637]: time="2026-01-22T01:04:15.830604543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:15.832830 containerd[1637]: time="2026-01-22T01:04:15.832640171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 01:04:15.832830 containerd[1637]: time="2026-01-22T01:04:15.832683800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:15.833128 kubelet[2860]: E0122 01:04:15.833043 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:15.833492 kubelet[2860]: E0122 01:04:15.833137 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:15.833537 kubelet[2860]: E0122 01:04:15.833286 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:15.835515 kubelet[2860]: E0122 01:04:15.835146 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:15.850000 audit: BPF prog-id=186 op=LOAD Jan 22 01:04:15.850000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff483190d0 a2=94 a3=1 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.850000 audit: BPF prog-id=186 op=UNLOAD Jan 22 01:04:15.850000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff483190d0 a2=94 a3=1 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.865000 audit: BPF prog-id=187 op=LOAD Jan 22 01:04:15.865000 audit[4295]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=5 a1=7fff483190c0 a2=94 a3=4 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.865000 audit: BPF prog-id=187 op=UNLOAD Jan 22 01:04:15.865000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff483190c0 a2=0 a3=4 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.865000 audit: BPF prog-id=188 op=LOAD Jan 22 01:04:15.865000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff48318f20 a2=94 a3=5 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.866000 audit: BPF prog-id=188 op=UNLOAD Jan 22 01:04:15.866000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff48318f20 a2=0 a3=5 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.866000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.866000 audit: BPF prog-id=189 op=LOAD Jan 22 01:04:15.866000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff48319140 a2=94 a3=6 items=0 ppid=4110 pid=4295 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.866000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.866000 audit: BPF prog-id=189 op=UNLOAD Jan 22 01:04:15.866000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff48319140 a2=0 a3=6 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.866000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.866000 audit: BPF prog-id=190 op=LOAD Jan 22 01:04:15.866000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff483188f0 a2=94 a3=88 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.866000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.867000 audit: BPF prog-id=191 op=LOAD Jan 22 01:04:15.867000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff48318770 a2=94 a3=2 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.867000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.867000 audit: BPF prog-id=191 op=UNLOAD Jan 22 01:04:15.867000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff483187a0 a2=0 a3=7fff483188a0 items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.867000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.868000 audit: BPF prog-id=190 op=UNLOAD Jan 22 01:04:15.868000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=4474d10 a2=0 a3=83d7c9557ba0aa6a items=0 ppid=4110 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.868000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 01:04:15.893000 audit: BPF prog-id=192 op=LOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeedbb43a0 a2=98 a3=1999999999999999 items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:15.893000 audit: BPF prog-id=192 op=UNLOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeedbb4370 a3=0 items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:15.893000 audit: BPF prog-id=193 op=LOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeedbb4280 a2=94 a3=ffff items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:15.893000 audit: BPF prog-id=193 op=UNLOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeedbb4280 a2=94 a3=ffff items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:15.893000 audit: BPF prog-id=194 op=LOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeedbb42c0 a2=94 a3=7ffeedbb44a0 items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:15.893000 audit: BPF prog-id=194 op=UNLOAD Jan 22 01:04:15.893000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeedbb42c0 a2=94 a3=7ffeedbb44a0 items=0 ppid=4110 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:15.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 01:04:16.033286 systemd-networkd[1547]: vxlan.calico: Link UP Jan 22 01:04:16.033305 systemd-networkd[1547]: vxlan.calico: Gained carrier Jan 22 01:04:16.092000 audit: BPF prog-id=195 op=LOAD Jan 22 01:04:16.092000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaa590ff0 a2=98 a3=0 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.092000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=195 op=UNLOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffeaa590fc0 a3=0 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=196 op=LOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaa590e00 a2=94 a3=54428f items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=196 op=UNLOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeaa590e00 a2=94 a3=54428f items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=197 op=LOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaa590e30 a2=94 a3=2 items=0 
ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=197 op=UNLOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeaa590e30 a2=0 a3=2 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=198 op=LOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeaa590be0 a2=94 a3=4 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=198 op=UNLOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeaa590be0 a2=94 a3=4 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=199 op=LOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeaa590ce0 a2=94 a3=7ffeaa590e60 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.094000 audit: BPF prog-id=199 op=UNLOAD Jan 22 01:04:16.094000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeaa590ce0 a2=0 a3=7ffeaa590e60 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.096000 audit: BPF prog-id=200 op=LOAD Jan 22 01:04:16.096000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeaa590410 a2=94 a3=2 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.096000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.096000 audit: BPF prog-id=200 op=UNLOAD Jan 22 01:04:16.096000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeaa590410 a2=0 a3=2 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.096000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.096000 audit: BPF prog-id=201 op=LOAD Jan 22 01:04:16.096000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeaa590510 a2=94 a3=30 items=0 ppid=4110 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.096000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 01:04:16.117000 audit: BPF prog-id=202 op=LOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff37543060 a2=98 a3=0 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.117000 audit: BPF prog-id=202 op=UNLOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff37543030 a3=0 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.117000 audit: BPF prog-id=203 op=LOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff37542e50 a2=94 a3=54428f items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.117000 audit: BPF prog-id=203 op=UNLOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff37542e50 a2=94 a3=54428f items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.117000 audit: BPF prog-id=204 op=LOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff37542e80 a2=94 a3=2 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.117000 audit: BPF prog-id=204 op=UNLOAD Jan 22 01:04:16.117000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff37542e80 a2=0 a3=2 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.165581 kubelet[2860]: E0122 01:04:16.165325 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:16.222000 audit[4343]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:16.222000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3520fcf0 a2=0 a3=7ffc3520fcdc items=0 ppid=3028 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:16.230000 audit[4343]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:16.230000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc3520fcf0 a2=0 a3=0 items=0 ppid=3028 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:16.242562 systemd-networkd[1547]: cali0e4013de2ae: Gained IPv6LL Jan 22 01:04:16.374000 audit: BPF prog-id=205 op=LOAD Jan 22 01:04:16.374000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff37542d40 a2=94 a3=1 items=0 
ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.374000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.374000 audit: BPF prog-id=205 op=UNLOAD Jan 22 01:04:16.374000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff37542d40 a2=94 a3=1 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.374000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.387000 audit: BPF prog-id=206 op=LOAD Jan 22 01:04:16.387000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff37542d30 a2=94 a3=4 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.387000 audit: BPF prog-id=206 op=UNLOAD Jan 22 01:04:16.387000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff37542d30 a2=0 a3=4 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.388000 audit: BPF prog-id=207 op=LOAD Jan 22 01:04:16.388000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff37542b90 a2=94 a3=5 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.388000 audit: BPF prog-id=207 op=UNLOAD Jan 22 01:04:16.388000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff37542b90 a2=0 a3=5 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.388000 audit: BPF prog-id=208 op=LOAD Jan 22 01:04:16.388000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff37542db0 a2=94 a3=6 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.388000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.388000 audit: BPF prog-id=208 op=UNLOAD Jan 22 01:04:16.388000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff37542db0 a2=0 a3=6 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.389000 audit: BPF prog-id=209 op=LOAD Jan 22 01:04:16.389000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff37542560 a2=94 a3=88 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.389000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.389000 audit: BPF prog-id=210 op=LOAD Jan 22 01:04:16.389000 audit[4339]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff375423e0 a2=94 a3=2 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.389000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.389000 audit: BPF prog-id=210 op=UNLOAD Jan 22 01:04:16.389000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff37542410 a2=0 a3=7fff37542510 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.389000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.390000 audit: BPF prog-id=209 op=UNLOAD Jan 22 01:04:16.390000 audit[4339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=31fa0d10 a2=0 a3=a1256d1a0c02bbc9 items=0 ppid=4110 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.390000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 01:04:16.406000 audit: BPF prog-id=201 op=UNLOAD Jan 22 01:04:16.406000 audit[4110]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009151c0 a2=0 a3=0 items=0 ppid=4087 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.406000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 22 01:04:16.504000 audit[4365]: NETFILTER_CFG table=mangle:123 
family=2 entries=16 op=nft_register_chain pid=4365 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:16.504000 audit[4365]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe77dcb4d0 a2=0 a3=7ffe77dcb4bc items=0 ppid=4110 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.504000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:16.512000 audit[4369]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4369 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:16.512000 audit[4369]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffce6f6abd0 a2=0 a3=7ffce6f6abbc items=0 ppid=4110 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.512000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:16.522000 audit[4364]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:16.522000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc043367d0 a2=0 a3=7ffc043367bc items=0 ppid=4110 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.522000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:16.528000 audit[4368]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:16.528000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe1395dcb0 a2=0 a3=7ffe1395dc9c items=0 ppid=4110 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:16.528000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:17.158175 kubelet[2860]: E0122 01:04:17.158111 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:17.721780 containerd[1637]: time="2026-01-22T01:04:17.721513899Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-gwr5j,Uid:03592d59-bdcf-436c-be98-9c688e9b6f7e,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:17.908092 systemd-networkd[1547]: vxlan.calico: Gained IPv6LL Jan 22 01:04:17.937708 systemd-networkd[1547]: cali1eecc984e52: Link UP Jan 22 01:04:17.942774 systemd-networkd[1547]: cali1eecc984e52: Gained carrier Jan 22 01:04:17.981132 containerd[1637]: 2026-01-22 01:04:17.788 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--gwr5j-eth0 goldmane-666569f655- calico-system 03592d59-bdcf-436c-be98-9c688e9b6f7e 857 0 2026-01-22 01:03:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-gwr5j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1eecc984e52 [] [] }} ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-" Jan 22 01:04:17.981132 containerd[1637]: 2026-01-22 01:04:17.789 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.981132 containerd[1637]: 2026-01-22 01:04:17.853 [INFO][4395] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" HandleID="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Workload="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.983664 containerd[1637]: 
2026-01-22 01:04:17.854 [INFO][4395] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" HandleID="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Workload="localhost-k8s-goldmane--666569f655--gwr5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-gwr5j", "timestamp":"2026-01-22 01:04:17.853834763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.854 [INFO][4395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.854 [INFO][4395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.854 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.868 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" host="localhost" Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.880 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.889 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.893 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.898 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:17.983664 containerd[1637]: 2026-01-22 01:04:17.898 [INFO][4395] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" host="localhost" Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.901 [INFO][4395] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00 Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.912 [INFO][4395] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" host="localhost" Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.922 [INFO][4395] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" host="localhost" Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.922 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" host="localhost" Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.922 [INFO][4395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:17.984145 containerd[1637]: 2026-01-22 01:04:17.922 [INFO][4395] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" HandleID="k8s-pod-network.a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Workload="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.986549 containerd[1637]: 2026-01-22 01:04:17.929 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--gwr5j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"03592d59-bdcf-436c-be98-9c688e9b6f7e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-gwr5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1eecc984e52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:17.986549 containerd[1637]: 2026-01-22 01:04:17.929 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.986783 containerd[1637]: 2026-01-22 01:04:17.929 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eecc984e52 ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.986783 containerd[1637]: 2026-01-22 01:04:17.945 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:17.986851 containerd[1637]: 2026-01-22 01:04:17.946 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--gwr5j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"03592d59-bdcf-436c-be98-9c688e9b6f7e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00", Pod:"goldmane-666569f655-gwr5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1eecc984e52", MAC:"fa:18:94:da:d8:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:17.987135 containerd[1637]: 2026-01-22 01:04:17.965 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" Namespace="calico-system" Pod="goldmane-666569f655-gwr5j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--gwr5j-eth0" Jan 22 01:04:18.018993 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 22 01:04:18.020319 kernel: audit: 
type=1325 audit(1769043858.009:645): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:18.009000 audit[4412]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:18.009000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffd8f089aa0 a2=0 a3=7ffd8f089a8c items=0 ppid=4110 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.046653 containerd[1637]: time="2026-01-22T01:04:18.046547996Z" level=info msg="connecting to shim a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00" address="unix:///run/containerd/s/7b40453efe654800ff6e601dbfcc9060173609f5aaec9eedcbc6df51b164830f" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:18.051170 kernel: audit: type=1300 audit(1769043858.009:645): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffd8f089aa0 a2=0 a3=7ffd8f089a8c items=0 ppid=4110 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.009000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:18.065505 kernel: audit: type=1327 audit(1769043858.009:645): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:18.159723 systemd[1]: Started cri-containerd-a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00.scope - libcontainer container 
a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00. Jan 22 01:04:18.188000 audit: BPF prog-id=211 op=LOAD Jan 22 01:04:18.195634 kernel: audit: type=1334 audit(1769043858.188:646): prog-id=211 op=LOAD Jan 22 01:04:18.195736 kernel: audit: type=1334 audit(1769043858.194:647): prog-id=212 op=LOAD Jan 22 01:04:18.194000 audit: BPF prog-id=212 op=LOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.205572 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:18.223615 kernel: audit: type=1300 audit(1769043858.194:647): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.223679 kernel: audit: type=1327 audit(1769043858.194:647): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: BPF prog-id=212 op=UNLOAD Jan 22 01:04:18.251789 kernel: audit: type=1334 audit(1769043858.194:648): prog-id=212 op=UNLOAD Jan 22 
01:04:18.255096 kernel: audit: type=1300 audit(1769043858.194:648): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.292683 kernel: audit: type=1327 audit(1769043858.194:648): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: BPF prog-id=213 op=LOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 
01:04:18.194000 audit: BPF prog-id=214 op=LOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: BPF prog-id=214 op=UNLOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: BPF prog-id=213 op=UNLOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.194000 audit: BPF prog-id=215 op=LOAD Jan 22 01:04:18.194000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4421 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130323539383939623138373864653437303235373731636432363335 Jan 22 01:04:18.326857 containerd[1637]: time="2026-01-22T01:04:18.326764749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gwr5j,Uid:03592d59-bdcf-436c-be98-9c688e9b6f7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0259899b1878de47025771cd2635f02f93c23fe05a9d9c205ecd271086e2f00\"" Jan 22 01:04:18.337888 containerd[1637]: time="2026-01-22T01:04:18.337756583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 01:04:18.398973 containerd[1637]: time="2026-01-22T01:04:18.398748035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:18.400804 containerd[1637]: time="2026-01-22T01:04:18.400629027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 01:04:18.400804 containerd[1637]: 
time="2026-01-22T01:04:18.400724479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:18.401784 kubelet[2860]: E0122 01:04:18.401690 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:18.401784 kubelet[2860]: E0122 01:04:18.401766 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:18.402254 kubelet[2860]: E0122 01:04:18.402016 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:tru
e,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:18.405789 kubelet[2860]: E0122 01:04:18.405538 2860 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:19.168498 kubelet[2860]: E0122 01:04:19.167533 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:19.217000 audit[4459]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:19.217000 audit[4459]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb1a6cea0 a2=0 a3=7ffeb1a6ce8c items=0 ppid=3028 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:19.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:19.229000 audit[4459]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:19.229000 audit[4459]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeb1a6cea0 a2=0 a3=0 items=0 ppid=3028 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:19.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:19.249998 systemd-networkd[1547]: cali1eecc984e52: Gained IPv6LL Jan 22 01:04:19.722011 containerd[1637]: time="2026-01-22T01:04:19.721776715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-xxfmg,Uid:08120695-b0cc-4d57-901b-351d05e2677f,Namespace:calico-apiserver,Attempt:0,}" Jan 22 01:04:19.722011 containerd[1637]: time="2026-01-22T01:04:19.722093852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-nnclr,Uid:3572850e-6c2e-4ed9-a568-75ca88ead4a5,Namespace:calico-apiserver,Attempt:0,}" Jan 22 01:04:19.960773 systemd-networkd[1547]: calid0ba3f9646b: Link UP Jan 22 01:04:19.961232 systemd-networkd[1547]: calid0ba3f9646b: Gained carrier Jan 22 01:04:19.981833 containerd[1637]: 2026-01-22 01:04:19.815 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0 calico-apiserver-7968ffc4b- calico-apiserver 08120695-b0cc-4d57-901b-351d05e2677f 856 0 2026-01-22 01:03:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7968ffc4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7968ffc4b-xxfmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0ba3f9646b [] [] }} ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-" Jan 22 01:04:19.981833 containerd[1637]: 2026-01-22 01:04:19.815 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.981833 containerd[1637]: 2026-01-22 01:04:19.879 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" HandleID="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Workload="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.880 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" HandleID="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Workload="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7968ffc4b-xxfmg", "timestamp":"2026-01-22 01:04:19.879703206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.880 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.880 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.880 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.894 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" host="localhost" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.906 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.915 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.918 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.922 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:19.982174 containerd[1637]: 2026-01-22 01:04:19.922 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" host="localhost" Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.926 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.935 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" host="localhost" Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.947 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" host="localhost" Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.947 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" host="localhost" Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.947 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:19.982854 containerd[1637]: 2026-01-22 01:04:19.947 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" HandleID="k8s-pod-network.db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Workload="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.983101 containerd[1637]: 2026-01-22 01:04:19.951 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0", GenerateName:"calico-apiserver-7968ffc4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"08120695-b0cc-4d57-901b-351d05e2677f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968ffc4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7968ffc4b-xxfmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0ba3f9646b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:19.983264 containerd[1637]: 2026-01-22 01:04:19.952 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.983264 containerd[1637]: 2026-01-22 01:04:19.952 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0ba3f9646b ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.983264 containerd[1637]: 2026-01-22 01:04:19.958 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:19.985767 containerd[1637]: 2026-01-22 01:04:19.959 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0", GenerateName:"calico-apiserver-7968ffc4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"08120695-b0cc-4d57-901b-351d05e2677f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968ffc4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee", Pod:"calico-apiserver-7968ffc4b-xxfmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0ba3f9646b", MAC:"02:d1:9a:0b:00:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:19.987528 containerd[1637]: 2026-01-22 01:04:19.974 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-xxfmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--xxfmg-eth0" Jan 22 01:04:20.018000 audit[4514]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:20.018000 audit[4514]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffc23092ab0 a2=0 a3=7ffc23092a9c items=0 ppid=4110 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.018000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:20.064508 containerd[1637]: time="2026-01-22T01:04:20.063628437Z" level=info msg="connecting to shim db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee" address="unix:///run/containerd/s/4b6383a4ae44becd2c6b0e7eeccd919e3b1b922374360ee77d8124d8108831ad" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:20.084031 systemd-networkd[1547]: calie7964d5d41d: Link UP Jan 22 01:04:20.086881 systemd-networkd[1547]: calie7964d5d41d: Gained carrier Jan 22 01:04:20.118246 containerd[1637]: 2026-01-22 01:04:19.821 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0 calico-apiserver-7968ffc4b- calico-apiserver 3572850e-6c2e-4ed9-a568-75ca88ead4a5 849 0 2026-01-22 01:03:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7968ffc4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7968ffc4b-nnclr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7964d5d41d [] [] }} ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-" Jan 22 01:04:20.118246 containerd[1637]: 2026-01-22 01:04:19.821 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.118246 containerd[1637]: 2026-01-22 01:04:19.892 [INFO][4490] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" HandleID="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Workload="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:19.892 [INFO][4490] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" HandleID="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Workload="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7968ffc4b-nnclr", "timestamp":"2026-01-22 01:04:19.892236536 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:19.892 [INFO][4490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:19.948 [INFO][4490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:19.948 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:19.999 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" host="localhost" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:20.016 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:20.026 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:20.030 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:20.041 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:20.118616 containerd[1637]: 2026-01-22 01:04:20.041 [INFO][4490] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" host="localhost" Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.046 [INFO][4490] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1 Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.059 [INFO][4490] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" host="localhost" Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.072 [INFO][4490] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" host="localhost" Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.072 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" host="localhost" Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.072 [INFO][4490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:20.118891 containerd[1637]: 2026-01-22 01:04:20.072 [INFO][4490] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" HandleID="k8s-pod-network.680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Workload="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.119062 containerd[1637]: 2026-01-22 01:04:20.079 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0", GenerateName:"calico-apiserver-7968ffc4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3572850e-6c2e-4ed9-a568-75ca88ead4a5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 51, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968ffc4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7968ffc4b-nnclr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7964d5d41d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:20.119174 containerd[1637]: 2026-01-22 01:04:20.079 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.119174 containerd[1637]: 2026-01-22 01:04:20.079 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7964d5d41d ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.119174 containerd[1637]: 2026-01-22 01:04:20.089 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" 
Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.119237 containerd[1637]: 2026-01-22 01:04:20.091 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0", GenerateName:"calico-apiserver-7968ffc4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3572850e-6c2e-4ed9-a568-75ca88ead4a5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968ffc4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1", Pod:"calico-apiserver-7968ffc4b-nnclr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7964d5d41d", MAC:"7a:62:2c:02:ee:58", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:20.119341 containerd[1637]: 2026-01-22 01:04:20.113 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" Namespace="calico-apiserver" Pod="calico-apiserver-7968ffc4b-nnclr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968ffc4b--nnclr-eth0" Jan 22 01:04:20.142657 systemd[1]: Started cri-containerd-db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee.scope - libcontainer container db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee. Jan 22 01:04:20.150000 audit[4556]: NETFILTER_CFG table=filter:131 family=2 entries=45 op=nft_register_chain pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:20.150000 audit[4556]: SYSCALL arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7fffd529de80 a2=0 a3=7fffd529de6c items=0 ppid=4110 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.150000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:20.173489 kubelet[2860]: E0122 01:04:20.173197 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 
01:04:20.201000 audit: BPF prog-id=216 op=LOAD Jan 22 01:04:20.203000 audit: BPF prog-id=217 op=LOAD Jan 22 01:04:20.203000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.203000 audit: BPF prog-id=217 op=UNLOAD Jan 22 01:04:20.203000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.205000 audit: BPF prog-id=218 op=LOAD Jan 22 01:04:20.205000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.205000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.206000 audit: BPF prog-id=219 op=LOAD Jan 22 01:04:20.206000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.206000 audit: BPF prog-id=219 op=UNLOAD Jan 22 01:04:20.206000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.207000 audit: BPF prog-id=218 op=UNLOAD Jan 22 01:04:20.207000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:04:20.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.207000 audit: BPF prog-id=220 op=LOAD Jan 22 01:04:20.207000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462343135336334393937393737393932636637393438623162383566 Jan 22 01:04:20.210152 containerd[1637]: time="2026-01-22T01:04:20.210008822Z" level=info msg="connecting to shim 680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1" address="unix:///run/containerd/s/b850e0677c5c29ec4745da950710500a44e70217bce616f20ad5fbb9c8dd883e" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:20.213160 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:20.305183 systemd[1]: Started cri-containerd-680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1.scope - libcontainer container 680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1. 
Jan 22 01:04:20.337000 audit: BPF prog-id=221 op=LOAD Jan 22 01:04:20.338000 audit: BPF prog-id=222 op=LOAD Jan 22 01:04:20.338000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.339000 audit: BPF prog-id=222 op=UNLOAD Jan 22 01:04:20.339000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.339845 containerd[1637]: time="2026-01-22T01:04:20.339557892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-xxfmg,Uid:08120695-b0cc-4d57-901b-351d05e2677f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"db4153c4997977992cf7948b1b85fbf19f885e3e009aaca7a790b25e133dfdee\"" Jan 22 01:04:20.340000 audit: BPF prog-id=223 op=LOAD Jan 22 01:04:20.340000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.340000 audit: BPF prog-id=224 op=LOAD Jan 22 01:04:20.340000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.340000 audit: BPF prog-id=224 op=UNLOAD Jan 22 01:04:20.340000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.340000 audit: BPF prog-id=223 op=UNLOAD Jan 22 01:04:20.340000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.340000 audit: BPF prog-id=225 op=LOAD Jan 22 01:04:20.340000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:20.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303834346338333865316166333935346566363535303862626533 Jan 22 01:04:20.343190 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:20.347034 containerd[1637]: time="2026-01-22T01:04:20.346892238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:20.412685 containerd[1637]: time="2026-01-22T01:04:20.412634749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968ffc4b-nnclr,Uid:3572850e-6c2e-4ed9-a568-75ca88ead4a5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"680844c838e1af3954ef65508bbe3bcd2938ac35c25540f8e0e95dde819f37e1\"" Jan 22 01:04:20.424816 containerd[1637]: time="2026-01-22T01:04:20.424717606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:20.426865 containerd[1637]: 
time="2026-01-22T01:04:20.426715972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:20.426865 containerd[1637]: time="2026-01-22T01:04:20.426751904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:20.427340 kubelet[2860]: E0122 01:04:20.427102 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:20.427340 kubelet[2860]: E0122 01:04:20.427183 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:20.427642 kubelet[2860]: E0122 01:04:20.427570 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw4gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-xxfmg_calico-apiserver(08120695-b0cc-4d57-901b-351d05e2677f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:20.428070 containerd[1637]: time="2026-01-22T01:04:20.427864795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:20.429240 kubelet[2860]: E0122 01:04:20.429167 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:20.490976 containerd[1637]: time="2026-01-22T01:04:20.490860461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:20.493329 containerd[1637]: time="2026-01-22T01:04:20.493149122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:20.493329 containerd[1637]: time="2026-01-22T01:04:20.493213992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:20.493721 kubelet[2860]: E0122 01:04:20.493612 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:20.493721 kubelet[2860]: E0122 01:04:20.493692 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:20.493984 kubelet[2860]: E0122 01:04:20.493842 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g2d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:20.495696 kubelet[2860]: E0122 01:04:20.495607 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:20.724617 kubelet[2860]: E0122 01:04:20.724575 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:20.725477 kubelet[2860]: E0122 01:04:20.724678 2860 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:20.725566 containerd[1637]: time="2026-01-22T01:04:20.725529302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rbgbh,Uid:52c71295-c428-49b6-883f-725a15bb85e1,Namespace:kube-system,Attempt:0,}" Jan 22 01:04:20.726048 containerd[1637]: time="2026-01-22T01:04:20.725822294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8grjm,Uid:3eaef106-75d7-42c8-a82e-57ae58f4f9cb,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:20.727846 containerd[1637]: time="2026-01-22T01:04:20.727646031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d659955cd-4lv6v,Uid:410a0576-5e2f-4491-9946-abeea17a07fc,Namespace:calico-system,Attempt:0,}" Jan 22 01:04:20.727846 containerd[1637]: time="2026-01-22T01:04:20.727688089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nn4x,Uid:5fa7f8c6-6405-4444-b48f-a91387b277e9,Namespace:kube-system,Attempt:0,}" Jan 22 01:04:21.075782 systemd-networkd[1547]: cali4c652d8c939: Link UP Jan 22 01:04:21.077617 systemd-networkd[1547]: cali4c652d8c939: Gained carrier Jan 22 01:04:21.116575 containerd[1637]: 2026-01-22 01:04:20.880 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0 coredns-674b8bbfcf- kube-system 52c71295-c428-49b6-883f-725a15bb85e1 855 0 2026-01-22 01:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rbgbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c652d8c939 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-" Jan 22 01:04:21.116575 containerd[1637]: 2026-01-22 01:04:20.886 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.116575 containerd[1637]: 2026-01-22 01:04:20.980 [INFO][4691] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" HandleID="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Workload="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:20.983 [INFO][4691] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" HandleID="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Workload="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rbgbh", "timestamp":"2026-01-22 01:04:20.980840055 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:20.984 [INFO][4691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:20.984 [INFO][4691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:20.984 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:20.999 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" host="localhost" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:21.014 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:21.025 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:21.029 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:21.033 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.117111 containerd[1637]: 2026-01-22 01:04:21.033 [INFO][4691] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" host="localhost" Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.036 [INFO][4691] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639 Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.046 [INFO][4691] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" host="localhost" Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.058 [INFO][4691] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" host="localhost" Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.058 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" host="localhost" Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.059 [INFO][4691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:21.119090 containerd[1637]: 2026-01-22 01:04:21.059 [INFO][4691] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" HandleID="k8s-pod-network.ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Workload="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.119631 containerd[1637]: 2026-01-22 01:04:21.066 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52c71295-c428-49b6-883f-725a15bb85e1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rbgbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c652d8c939", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.120572 containerd[1637]: 2026-01-22 01:04:21.066 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.120572 containerd[1637]: 2026-01-22 01:04:21.066 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c652d8c939 ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.120572 containerd[1637]: 2026-01-22 01:04:21.077 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.120694 containerd[1637]: 2026-01-22 01:04:21.079 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52c71295-c428-49b6-883f-725a15bb85e1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639", Pod:"coredns-674b8bbfcf-rbgbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c652d8c939", MAC:"76:9d:f0:5c:ad:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.120694 containerd[1637]: 2026-01-22 01:04:21.100 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" Namespace="kube-system" Pod="coredns-674b8bbfcf-rbgbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rbgbh-eth0" Jan 22 01:04:21.159000 audit[4720]: NETFILTER_CFG table=filter:132 family=2 entries=54 op=nft_register_chain pid=4720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:21.159000 audit[4720]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffc6d5995f0 a2=0 a3=7ffc6d5995dc items=0 ppid=4110 pid=4720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:21.180343 kubelet[2860]: E0122 01:04:21.180145 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:21.189680 kubelet[2860]: E0122 01:04:21.189634 2860 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:21.225622 containerd[1637]: time="2026-01-22T01:04:21.225298677Z" level=info msg="connecting to shim ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639" address="unix:///run/containerd/s/58395cecb507837463e4fa6b2a4fa458c944aa4c4e6466df94b7bfe9d47f1838" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:21.281000 audit[4741]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:21.281000 audit[4741]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb275fa60 a2=0 a3=7ffcb275fa4c items=0 ppid=3028 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:21.293740 systemd-networkd[1547]: califbd15c850c4: Link UP Jan 22 01:04:21.296808 systemd[1]: Started cri-containerd-ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639.scope - libcontainer container ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639. 
Jan 22 01:04:21.302000 audit[4741]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:21.302000 audit[4741]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb275fa60 a2=0 a3=0 items=0 ppid=3028 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:21.305093 systemd-networkd[1547]: califbd15c850c4: Gained carrier Jan 22 01:04:21.332000 audit: BPF prog-id=226 op=LOAD Jan 22 01:04:21.334000 audit: BPF prog-id=227 op=LOAD Jan 22 01:04:21.334000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.334000 audit: BPF prog-id=227 op=UNLOAD Jan 22 01:04:21.334000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.334000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.336000 audit: BPF prog-id=228 op=LOAD Jan 22 01:04:21.336000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.336000 audit: BPF prog-id=229 op=LOAD Jan 22 01:04:21.336000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.336000 audit: BPF prog-id=229 op=UNLOAD Jan 22 01:04:21.336000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 22 01:04:21.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.336000 audit: BPF prog-id=228 op=UNLOAD Jan 22 01:04:21.336000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.337000 audit: BPF prog-id=230 op=LOAD Jan 22 01:04:21.337000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4730 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566303337393063653831626232613936303066396165313935313865 Jan 22 01:04:21.340795 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:21.342000 audit[4767]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 
22 01:04:21.342000 audit[4767]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffebf7f5a50 a2=0 a3=7ffebf7f5a3c items=0 ppid=3028 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:21.352000 audit[4767]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:21.352000 audit[4767]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffebf7f5a50 a2=0 a3=0 items=0 ppid=3028 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:21.362105 systemd-networkd[1547]: calid0ba3f9646b: Gained IPv6LL Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:20.858 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8grjm-eth0 csi-node-driver- calico-system 3eaef106-75d7-42c8-a82e-57ae58f4f9cb 734 0 2026-01-22 01:03:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8grjm eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] califbd15c850c4 [] [] }} ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:20.865 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:20.994 [INFO][4676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" HandleID="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Workload="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:20.994 [INFO][4676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" HandleID="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Workload="localhost-k8s-csi--node--driver--8grjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000ddbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8grjm", "timestamp":"2026-01-22 01:04:20.994092211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:20.994 [INFO][4676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.060 [INFO][4676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.062 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.101 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.128 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.142 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.150 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.156 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.156 [INFO][4676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.167 [INFO][4676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866 Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.194 [INFO][4676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.250 [INFO][4676] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.252 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" host="localhost" Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.253 [INFO][4676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:21.371454 containerd[1637]: 2026-01-22 01:04:21.253 [INFO][4676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" HandleID="k8s-pod-network.7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Workload="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.270 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8grjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eaef106-75d7-42c8-a82e-57ae58f4f9cb", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8grjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbd15c850c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.270 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.270 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbd15c850c4 ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.318 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.322 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8grjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eaef106-75d7-42c8-a82e-57ae58f4f9cb", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866", Pod:"csi-node-driver-8grjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbd15c850c4", MAC:"62:49:63:2a:db:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.372112 containerd[1637]: 2026-01-22 01:04:21.354 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" 
Namespace="calico-system" Pod="csi-node-driver-8grjm" WorkloadEndpoint="localhost-k8s-csi--node--driver--8grjm-eth0" Jan 22 01:04:21.443230 systemd-networkd[1547]: cali9614124c08a: Link UP Jan 22 01:04:21.445641 containerd[1637]: time="2026-01-22T01:04:21.444596749Z" level=info msg="connecting to shim 7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866" address="unix:///run/containerd/s/7733e1fc802b608595cc8ce551b490826c8404c89b7b4211e94c0629a7057213" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:21.453247 systemd-networkd[1547]: cali9614124c08a: Gained carrier Jan 22 01:04:21.499000 audit[4798]: NETFILTER_CFG table=filter:137 family=2 entries=52 op=nft_register_chain pid=4798 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:21.499000 audit[4798]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7fffeb5d4ce0 a2=0 a3=7fffeb5d4ccc items=0 ppid=4110 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.499000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:20.915 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0 coredns-674b8bbfcf- kube-system 5fa7f8c6-6405-4444-b48f-a91387b277e9 853 0 2026-01-22 01:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2nn4x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9614124c08a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:20.917 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.000 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" HandleID="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Workload="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.001 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" HandleID="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Workload="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fdd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2nn4x", "timestamp":"2026-01-22 01:04:21.000833471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.002 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.252 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.254 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.293 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.318 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.338 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.346 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.351 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.351 [INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.358 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.371 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.400 [INFO][4697] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.401 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" host="localhost" Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.408 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:21.501247 containerd[1637]: 2026-01-22 01:04:21.408 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" HandleID="k8s-pod-network.8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Workload="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.430 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fa7f8c6-6405-4444-b48f-a91387b277e9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2nn4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9614124c08a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.430 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.430 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9614124c08a ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.461 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.462 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fa7f8c6-6405-4444-b48f-a91387b277e9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d", Pod:"coredns-674b8bbfcf-2nn4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9614124c08a", MAC:"da:57:28:71:a8:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.504267 containerd[1637]: 2026-01-22 01:04:21.478 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2nn4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2nn4x-eth0" Jan 22 01:04:21.532754 containerd[1637]: time="2026-01-22T01:04:21.531821274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rbgbh,Uid:52c71295-c428-49b6-883f-725a15bb85e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639\"" Jan 22 01:04:21.542808 kubelet[2860]: E0122 01:04:21.542102 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:21.566786 systemd[1]: Started cri-containerd-7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866.scope - libcontainer container 7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866. 
Jan 22 01:04:21.592070 containerd[1637]: time="2026-01-22T01:04:21.591322492Z" level=info msg="CreateContainer within sandbox \"ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 01:04:21.604987 containerd[1637]: time="2026-01-22T01:04:21.604857120Z" level=info msg="connecting to shim 8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d" address="unix:///run/containerd/s/72adb6a1df7b0029d21486ea6677aecd5bde6bcacf5ce7cdc0e379388a3dc292" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:21.632000 audit: BPF prog-id=231 op=LOAD Jan 22 01:04:21.633000 audit: BPF prog-id=232 op=LOAD Jan 22 01:04:21.633000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.633000 audit: BPF prog-id=232 op=UNLOAD Jan 22 01:04:21.633000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 
01:04:21.634000 audit: BPF prog-id=233 op=LOAD Jan 22 01:04:21.634000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.634000 audit: BPF prog-id=234 op=LOAD Jan 22 01:04:21.634000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.635000 audit: BPF prog-id=234 op=UNLOAD Jan 22 01:04:21.635000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.635000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.635000 audit: BPF prog-id=233 op=UNLOAD Jan 22 01:04:21.635000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.635000 audit: BPF prog-id=235 op=LOAD Jan 22 01:04:21.635000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=4785 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730343862323238326331336264633163343764666236623761363434 Jan 22 01:04:21.640603 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:21.649485 systemd-networkd[1547]: cali943e4b57026: Link UP Jan 22 01:04:21.650297 systemd-networkd[1547]: cali943e4b57026: Gained carrier Jan 22 01:04:21.666084 containerd[1637]: time="2026-01-22T01:04:21.665629785Z" 
level=info msg="Container 9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:04:21.681966 systemd-networkd[1547]: calie7964d5d41d: Gained IPv6LL Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:20.866 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0 calico-kube-controllers-d659955cd- calico-system 410a0576-5e2f-4491-9946-abeea17a07fc 859 0 2026-01-22 01:03:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d659955cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d659955cd-4lv6v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali943e4b57026 [] [] }} ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:20.868 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.008 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" HandleID="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" 
Workload="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.008 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" HandleID="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Workload="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138b90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d659955cd-4lv6v", "timestamp":"2026-01-22 01:04:21.008305087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.009 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.403 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.403 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.431 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.470 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.492 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.504 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.515 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.521 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.529 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386 Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.549 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.591 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.592 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" host="localhost" Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.593 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 01:04:21.689515 containerd[1637]: 2026-01-22 01:04:21.593 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" HandleID="k8s-pod-network.a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Workload="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 01:04:21.616 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0", GenerateName:"calico-kube-controllers-d659955cd-", Namespace:"calico-system", SelfLink:"", UID:"410a0576-5e2f-4491-9946-abeea17a07fc", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d659955cd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d659955cd-4lv6v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali943e4b57026", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 01:04:21.616 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 01:04:21.620 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali943e4b57026 ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 01:04:21.653 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 
01:04:21.655 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0", GenerateName:"calico-kube-controllers-d659955cd-", Namespace:"calico-system", SelfLink:"", UID:"410a0576-5e2f-4491-9946-abeea17a07fc", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 1, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d659955cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386", Pod:"calico-kube-controllers-d659955cd-4lv6v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali943e4b57026", MAC:"fa:2b:6c:07:0f:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 01:04:21.691557 containerd[1637]: 2026-01-22 
01:04:21.683 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" Namespace="calico-system" Pod="calico-kube-controllers-d659955cd-4lv6v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d659955cd--4lv6v-eth0" Jan 22 01:04:21.700308 containerd[1637]: time="2026-01-22T01:04:21.700175483Z" level=info msg="CreateContainer within sandbox \"ef03790ce81bb2a9600f9ae19518eaa975c85f46eeb39b6fbe18a25c22819639\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a\"" Jan 22 01:04:21.699000 audit[4864]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=4864 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:21.699000 audit[4864]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7ffe7f7edd40 a2=0 a3=7ffe7f7edd2c items=0 ppid=4110 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.699000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:21.705531 containerd[1637]: time="2026-01-22T01:04:21.703992794Z" level=info msg="StartContainer for \"9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a\"" Jan 22 01:04:21.705531 containerd[1637]: time="2026-01-22T01:04:21.705305446Z" level=info msg="connecting to shim 9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a" address="unix:///run/containerd/s/58395cecb507837463e4fa6b2a4fa458c944aa4c4e6466df94b7bfe9d47f1838" protocol=ttrpc version=3 Jan 22 01:04:21.722051 systemd[1]: Started 
cri-containerd-8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d.scope - libcontainer container 8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d. Jan 22 01:04:21.769550 containerd[1637]: time="2026-01-22T01:04:21.768782235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8grjm,Uid:3eaef106-75d7-42c8-a82e-57ae58f4f9cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"7048b2282c13bdc1c47dfb6b7a644495b3ba7aa3668ac18deb771e7e6a3c9866\"" Jan 22 01:04:21.774130 containerd[1637]: time="2026-01-22T01:04:21.773878902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 01:04:21.803869 systemd[1]: Started cri-containerd-9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a.scope - libcontainer container 9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a. Jan 22 01:04:21.809000 audit: BPF prog-id=236 op=LOAD Jan 22 01:04:21.819189 containerd[1637]: time="2026-01-22T01:04:21.818977114Z" level=info msg="connecting to shim a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386" address="unix:///run/containerd/s/5f579244958666c129cbf1c32998663b25db354502571247b638984a93916dde" namespace=k8s.io protocol=ttrpc version=3 Jan 22 01:04:21.817000 audit[4907]: NETFILTER_CFG table=filter:139 family=2 entries=66 op=nft_register_chain pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 01:04:21.817000 audit[4907]: SYSCALL arch=c000003e syscall=46 success=yes exit=29556 a0=3 a1=7ffce91ae7e0 a2=0 a3=7ffce91ae7cc items=0 ppid=4110 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.817000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 01:04:21.819000 
audit: BPF prog-id=237 op=LOAD Jan 22 01:04:21.819000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.820000 audit: BPF prog-id=237 op=UNLOAD Jan 22 01:04:21.820000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.821000 audit: BPF prog-id=238 op=LOAD Jan 22 01:04:21.821000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.821000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.823000 audit: BPF prog-id=239 op=LOAD Jan 22 01:04:21.823000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.823000 audit: BPF prog-id=239 op=UNLOAD Jan 22 01:04:21.823000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.823000 audit: BPF prog-id=238 op=UNLOAD Jan 22 01:04:21.823000 audit[4853]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:04:21.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.823000 audit: BPF prog-id=240 op=LOAD Jan 22 01:04:21.823000 audit[4853]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4835 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865643763393636336262323533366434323066613534346331386665 Jan 22 01:04:21.827532 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:21.837098 containerd[1637]: time="2026-01-22T01:04:21.836718099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:21.840291 containerd[1637]: time="2026-01-22T01:04:21.840168991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 01:04:21.842189 containerd[1637]: time="2026-01-22T01:04:21.840296688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:21.842236 kubelet[2860]: E0122 01:04:21.840718 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:21.842236 kubelet[2860]: E0122 01:04:21.840778 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:21.842236 kubelet[2860]: E0122 01:04:21.841021 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunA
sUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:21.853835 containerd[1637]: time="2026-01-22T01:04:21.853041729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 01:04:21.877000 audit: BPF prog-id=241 op=LOAD Jan 22 01:04:21.880000 audit: BPF prog-id=242 op=LOAD Jan 22 01:04:21.880000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.881000 audit: BPF prog-id=242 op=UNLOAD Jan 22 01:04:21.881000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 22 01:04:21.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.882000 audit: BPF prog-id=243 op=LOAD Jan 22 01:04:21.882000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.883000 audit: BPF prog-id=244 op=LOAD Jan 22 01:04:21.883000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.883000 audit: BPF prog-id=244 op=UNLOAD Jan 22 01:04:21.883000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.883000 audit: BPF prog-id=243 op=UNLOAD Jan 22 01:04:21.883000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.883000 audit: BPF prog-id=245 op=LOAD Jan 22 01:04:21.883000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4730 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963313765306235383331656666306532636164306562313331356335 Jan 22 01:04:21.917758 systemd[1]: Started cri-containerd-a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386.scope - libcontainer container a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386. 
Jan 22 01:04:21.930338 containerd[1637]: time="2026-01-22T01:04:21.930293779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:21.933761 containerd[1637]: time="2026-01-22T01:04:21.933708169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 01:04:21.934077 containerd[1637]: time="2026-01-22T01:04:21.933832173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:21.934570 kubelet[2860]: E0122 01:04:21.934501 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:21.934644 kubelet[2860]: E0122 01:04:21.934584 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:21.934782 kubelet[2860]: E0122 01:04:21.934700 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:21.936271 kubelet[2860]: E0122 01:04:21.936230 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:21.946668 containerd[1637]: time="2026-01-22T01:04:21.946123461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nn4x,Uid:5fa7f8c6-6405-4444-b48f-a91387b277e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d\"" Jan 22 01:04:21.950488 kubelet[2860]: E0122 01:04:21.950343 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:21.964286 containerd[1637]: time="2026-01-22T01:04:21.962849023Z" level=info msg="CreateContainer within sandbox \"8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 01:04:21.972000 audit: BPF prog-id=246 op=LOAD Jan 22 01:04:21.973000 audit: BPF prog-id=247 op=LOAD Jan 22 01:04:21.973000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b8238 a2=98 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.973000 audit: BPF prog-id=247 op=UNLOAD Jan 22 01:04:21.973000 audit[4932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.974000 audit: BPF prog-id=248 op=LOAD Jan 22 01:04:21.974000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b8488 a2=98 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.974000 audit: BPF prog-id=249 op=LOAD Jan 22 01:04:21.974000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b8218 a2=98 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.975000 audit: BPF prog-id=249 op=UNLOAD Jan 22 01:04:21.975000 audit[4932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.975000 audit: BPF prog-id=248 op=UNLOAD Jan 22 01:04:21.975000 audit[4932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.976000 audit: BPF prog-id=250 op=LOAD Jan 22 01:04:21.976000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b86e8 a2=98 a3=0 items=0 
ppid=4914 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:21.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613334326661326430306539663763666364623636376363636561 Jan 22 01:04:21.982648 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 01:04:22.002359 containerd[1637]: time="2026-01-22T01:04:22.002062320Z" level=info msg="Container 17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19: CDI devices from CRI Config.CDIDevices: []" Jan 22 01:04:22.008287 containerd[1637]: time="2026-01-22T01:04:22.008123920Z" level=info msg="StartContainer for \"9c17e0b5831eff0e2cad0eb1315c508b159ad1bd74112fac3c1d8d6c0788a07a\" returns successfully" Jan 22 01:04:22.030544 containerd[1637]: time="2026-01-22T01:04:22.029625676Z" level=info msg="CreateContainer within sandbox \"8ed7c9663bb2536d420fa544c18fe35a3a5b2af771d4e0f243b9687da64d6f4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19\"" Jan 22 01:04:22.037090 containerd[1637]: time="2026-01-22T01:04:22.037053146Z" level=info msg="StartContainer for \"17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19\"" Jan 22 01:04:22.041202 containerd[1637]: time="2026-01-22T01:04:22.041161597Z" level=info msg="connecting to shim 17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19" address="unix:///run/containerd/s/72adb6a1df7b0029d21486ea6677aecd5bde6bcacf5ce7cdc0e379388a3dc292" protocol=ttrpc version=3 Jan 22 01:04:22.126690 systemd[1]: Started 
cri-containerd-17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19.scope - libcontainer container 17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19. Jan 22 01:04:22.150274 containerd[1637]: time="2026-01-22T01:04:22.150084323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d659955cd-4lv6v,Uid:410a0576-5e2f-4491-9946-abeea17a07fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4a342fa2d00e9f7cfcdb667ccceacaff830e225856fed7db61aec94d9afe386\"" Jan 22 01:04:22.159671 containerd[1637]: time="2026-01-22T01:04:22.159531044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 01:04:22.217000 audit: BPF prog-id=251 op=LOAD Jan 22 01:04:22.225000 audit: BPF prog-id=252 op=LOAD Jan 22 01:04:22.225000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.228000 audit: BPF prog-id=252 op=UNLOAD Jan 22 01:04:22.228000 audit[4969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.228000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.229000 audit: BPF prog-id=253 op=LOAD Jan 22 01:04:22.229000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.232303 kubelet[2860]: E0122 01:04:22.232281 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:22.230000 audit: BPF prog-id=254 op=LOAD Jan 22 01:04:22.234102 containerd[1637]: time="2026-01-22T01:04:22.232571026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:22.230000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.230000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.235000 audit: BPF prog-id=254 op=UNLOAD Jan 22 01:04:22.235000 audit[4969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.240000 audit: BPF prog-id=253 op=UNLOAD Jan 22 01:04:22.240000 audit[4969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.242000 audit: BPF prog-id=255 op=LOAD Jan 22 01:04:22.242000 audit[4969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4835 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:04:22.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653434346564623866386566303162613861303438356365366238 Jan 22 01:04:22.249070 containerd[1637]: time="2026-01-22T01:04:22.248967465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:22.250237 containerd[1637]: time="2026-01-22T01:04:22.250129017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 01:04:22.253007 kubelet[2860]: E0122 01:04:22.252835 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:22.253007 kubelet[2860]: E0122 01:04:22.252964 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:22.253128 kubelet[2860]: E0122 01:04:22.253066 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:22.254201 kubelet[2860]: E0122 01:04:22.253466 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:22.255615 kubelet[2860]: E0122 01:04:22.255351 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:22.270099 kubelet[2860]: I0122 01:04:22.269973 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rbgbh" podStartSLOduration=47.269957473 podStartE2EDuration="47.269957473s" podCreationTimestamp="2026-01-22 01:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:04:22.264533274 +0000 UTC m=+51.921726473" watchObservedRunningTime="2026-01-22 01:04:22.269957473 +0000 UTC m=+51.927150652" Jan 22 01:04:22.273484 kubelet[2860]: E0122 01:04:22.271522 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:22.278640 kubelet[2860]: E0122 01:04:22.278582 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:22.347992 containerd[1637]: time="2026-01-22T01:04:22.347719986Z" level=info msg="StartContainer for \"17e444edb8f8ef01ba8a0485ce6b817ccf33d638ef3b58f449dcca9065990b19\" returns successfully" Jan 22 01:04:22.393000 audit[5013]: NETFILTER_CFG table=filter:140 family=2 entries=20 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:22.393000 audit[5013]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd7535620 a2=0 a3=7ffdd753560c items=0 ppid=3028 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:22.418000 audit[5013]: NETFILTER_CFG table=nat:141 family=2 entries=14 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:22.418000 audit[5013]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd7535620 a2=0 a3=0 items=0 ppid=3028 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:22.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:22.515356 
systemd-networkd[1547]: cali4c652d8c939: Gained IPv6LL Jan 22 01:04:22.961820 systemd-networkd[1547]: cali9614124c08a: Gained IPv6LL Jan 22 01:04:23.219974 systemd-networkd[1547]: califbd15c850c4: Gained IPv6LL Jan 22 01:04:23.247853 kubelet[2860]: E0122 01:04:23.246720 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:23.248666 kubelet[2860]: E0122 01:04:23.248010 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:23.250196 kubelet[2860]: E0122 01:04:23.250074 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:23.254970 kubelet[2860]: E0122 01:04:23.254477 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:23.315209 kubelet[2860]: I0122 01:04:23.314773 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2nn4x" podStartSLOduration=48.314752993 podStartE2EDuration="48.314752993s" podCreationTimestamp="2026-01-22 01:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 01:04:23.314021377 +0000 UTC m=+52.971214717" watchObservedRunningTime="2026-01-22 01:04:23.314752993 +0000 UTC m=+52.971946172" Jan 22 01:04:23.375000 audit[5019]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:23.381275 kernel: kauditd_printk_skb: 233 callbacks suppressed Jan 22 01:04:23.383142 kernel: audit: type=1325 audit(1769043863.375:732): table=filter:142 family=2 entries=17 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:23.375000 audit[5019]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff8e42f820 a2=0 a3=7fff8e42f80c items=0 ppid=3028 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:23.419512 kernel: audit: type=1300 audit(1769043863.375:732): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff8e42f820 a2=0 a3=7fff8e42f80c items=0 ppid=3028 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:23.419599 kernel: audit: type=1327 audit(1769043863.375:732): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:23.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:23.478000 audit[5019]: NETFILTER_CFG table=nat:143 family=2 entries=35 op=nft_register_chain pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:23.478000 audit[5019]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff8e42f820 a2=0 a3=7fff8e42f80c items=0 ppid=3028 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:23.511274 kernel: audit: type=1325 audit(1769043863.478:733): table=nat:143 family=2 entries=35 op=nft_register_chain pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:23.511346 kernel: audit: type=1300 audit(1769043863.478:733): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff8e42f820 a2=0 a3=7fff8e42f80c items=0 ppid=3028 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:23.511481 kernel: audit: type=1327 audit(1769043863.478:733): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:23.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:23.601815 systemd-networkd[1547]: cali943e4b57026: Gained IPv6LL Jan 22 01:04:24.249689 kubelet[2860]: E0122 01:04:24.249304 
2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:24.250313 kubelet[2860]: E0122 01:04:24.250169 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:24.554000 audit[5021]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:24.554000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4020b9f0 a2=0 a3=7ffc4020b9dc items=0 ppid=3028 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:24.589236 kernel: audit: type=1325 audit(1769043864.554:734): table=filter:144 family=2 entries=14 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:24.589320 kernel: audit: type=1300 audit(1769043864.554:734): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4020b9f0 a2=0 a3=7ffc4020b9dc items=0 ppid=3028 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:24.603512 kernel: audit: type=1327 audit(1769043864.554:734): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:24.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:24.608000 audit[5021]: NETFILTER_CFG table=nat:145 family=2 entries=56 op=nft_register_chain 
pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:24.608000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc4020b9f0 a2=0 a3=7ffc4020b9dc items=0 ppid=3028 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:24.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:04:24.625553 kernel: audit: type=1325 audit(1769043864.608:735): table=nat:145 family=2 entries=56 op=nft_register_chain pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:04:25.251521 kubelet[2860]: E0122 01:04:25.251303 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:28.722728 containerd[1637]: time="2026-01-22T01:04:28.722197939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 01:04:28.786805 containerd[1637]: time="2026-01-22T01:04:28.786491720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:28.788573 containerd[1637]: time="2026-01-22T01:04:28.788213615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 01:04:28.788573 containerd[1637]: time="2026-01-22T01:04:28.788329188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:28.788841 kubelet[2860]: E0122 01:04:28.788689 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:28.788841 kubelet[2860]: E0122 01:04:28.788844 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:28.789800 kubelet[2860]: E0122 01:04:28.789094 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be94bbf4992b4f04a6ddb3aecaced976,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:28.792742 containerd[1637]: time="2026-01-22T01:04:28.792645940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 01:04:28.858294 containerd[1637]: time="2026-01-22T01:04:28.857857625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:28.860175 containerd[1637]: time="2026-01-22T01:04:28.859792667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 01:04:28.860175 containerd[1637]: time="2026-01-22T01:04:28.859983771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:28.860355 kubelet[2860]: E0122 01:04:28.860148 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:28.860355 kubelet[2860]: E0122 01:04:28.860201 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:28.860590 kubelet[2860]: E0122 01:04:28.860490 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]C
ontainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:28.862551 kubelet[2860]: E0122 01:04:28.862234 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:33.722538 containerd[1637]: time="2026-01-22T01:04:33.722276649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 01:04:33.795128 containerd[1637]: time="2026-01-22T01:04:33.794718303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:33.796981 containerd[1637]: time="2026-01-22T01:04:33.796668428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 01:04:33.796981 containerd[1637]: time="2026-01-22T01:04:33.796767850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:33.797204 kubelet[2860]: E0122 
01:04:33.797160 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:33.799640 kubelet[2860]: E0122 01:04:33.797213 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:33.799640 kubelet[2860]: E0122 01:04:33.797544 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:33.800287 containerd[1637]: time="2026-01-22T01:04:33.797690956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 01:04:33.863356 containerd[1637]: time="2026-01-22T01:04:33.863161214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:33.866230 containerd[1637]: time="2026-01-22T01:04:33.866066205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 01:04:33.866325 containerd[1637]: time="2026-01-22T01:04:33.866137177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:33.866957 kubelet[2860]: E0122 01:04:33.866728 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:33.866957 kubelet[2860]: E0122 01:04:33.866847 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:33.867341 kubelet[2860]: E0122 01:04:33.867156 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursi
veReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:33.869313 containerd[1637]: time="2026-01-22T01:04:33.868959474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 01:04:33.869731 kubelet[2860]: E0122 01:04:33.869148 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:33.939567 containerd[1637]: time="2026-01-22T01:04:33.939288412Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:33.945453 containerd[1637]: time="2026-01-22T01:04:33.945160083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 01:04:33.945917 containerd[1637]: time="2026-01-22T01:04:33.945515695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:33.946011 kubelet[2860]: E0122 01:04:33.945681 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:33.946011 kubelet[2860]: E0122 01:04:33.945745 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:33.946147 kubelet[2860]: E0122 01:04:33.946098 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:33.948337 kubelet[2860]: E0122 01:04:33.947714 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:34.722712 containerd[1637]: time="2026-01-22T01:04:34.722497386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:34.791160 containerd[1637]: time="2026-01-22T01:04:34.790276214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:34.797470 containerd[1637]: time="2026-01-22T01:04:34.797162584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:34.797470 containerd[1637]: time="2026-01-22T01:04:34.797269042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:34.798154 kubelet[2860]: E0122 01:04:34.797799 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:34.798154 kubelet[2860]: E0122 01:04:34.797936 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:34.798993 kubelet[2860]: E0122 01:04:34.798324 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw4gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-xxfmg_calico-apiserver(08120695-b0cc-4d57-901b-351d05e2677f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:34.800017 containerd[1637]: time="2026-01-22T01:04:34.799777588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 01:04:34.800128 kubelet[2860]: E0122 01:04:34.799929 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:34.871539 containerd[1637]: time="2026-01-22T01:04:34.871217882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 
01:04:34.875516 containerd[1637]: time="2026-01-22T01:04:34.875306765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 01:04:34.876259 containerd[1637]: time="2026-01-22T01:04:34.875572857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:34.876992 kubelet[2860]: E0122 01:04:34.876595 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:34.876992 kubelet[2860]: E0122 01:04:34.876697 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:34.877781 kubelet[2860]: E0122 01:04:34.877142 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:34.880086 kubelet[2860]: E0122 01:04:34.879633 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:35.722265 containerd[1637]: time="2026-01-22T01:04:35.722216001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:35.787348 containerd[1637]: time="2026-01-22T01:04:35.786972372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 
01:04:35.789424 containerd[1637]: time="2026-01-22T01:04:35.789222218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:35.789424 containerd[1637]: time="2026-01-22T01:04:35.789331541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:35.789776 kubelet[2860]: E0122 01:04:35.789739 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:35.789959 kubelet[2860]: E0122 01:04:35.789938 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:35.790296 kubelet[2860]: E0122 01:04:35.790117 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g2d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:35.791795 kubelet[2860]: E0122 01:04:35.791510 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:41.725173 kubelet[2860]: E0122 01:04:41.725015 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:43.721562 kubelet[2860]: E0122 01:04:43.720925 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:44.729525 kubelet[2860]: E0122 01:04:44.728947 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:45.360711 kubelet[2860]: E0122 01:04:45.354686 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:45.720695 kubelet[2860]: E0122 01:04:45.720579 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:45.722766 kubelet[2860]: E0122 01:04:45.722526 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:46.726594 kubelet[2860]: E0122 01:04:46.726322 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:46.731262 kubelet[2860]: E0122 01:04:46.729286 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:04:46.731262 kubelet[2860]: E0122 01:04:46.730057 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:48.720700 kubelet[2860]: E0122 01:04:48.720118 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:53.722459 containerd[1637]: time="2026-01-22T01:04:53.722295480Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 01:04:53.792111 containerd[1637]: time="2026-01-22T01:04:53.791833026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:53.793813 containerd[1637]: time="2026-01-22T01:04:53.793668814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 01:04:53.793813 containerd[1637]: time="2026-01-22T01:04:53.793708646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:53.794557 kubelet[2860]: E0122 01:04:53.794127 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:53.794557 kubelet[2860]: E0122 01:04:53.794189 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:04:53.794557 kubelet[2860]: E0122 01:04:53.794350 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be94bbf4992b4f04a6ddb3aecaced976,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:53.798360 containerd[1637]: time="2026-01-22T01:04:53.798250239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 01:04:53.860221 containerd[1637]: 
time="2026-01-22T01:04:53.860091064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:53.861790 containerd[1637]: time="2026-01-22T01:04:53.861651652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 01:04:53.861790 containerd[1637]: time="2026-01-22T01:04:53.861753612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:53.862206 kubelet[2860]: E0122 01:04:53.862127 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:53.862206 kubelet[2860]: E0122 01:04:53.862201 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:04:53.862911 kubelet[2860]: E0122 01:04:53.862306 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:53.863597 kubelet[2860]: E0122 01:04:53.863454 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:04:55.720465 kubelet[2860]: E0122 01:04:55.720283 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:04:56.723238 containerd[1637]: time="2026-01-22T01:04:56.723024395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 01:04:56.802761 containerd[1637]: time="2026-01-22T01:04:56.802553231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:56.803847 containerd[1637]: time="2026-01-22T01:04:56.803815157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 01:04:56.804580 containerd[1637]: time="2026-01-22T01:04:56.804009900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
active requests=0, bytes read=0" Jan 22 01:04:56.805049 kubelet[2860]: E0122 01:04:56.804975 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:56.805470 kubelet[2860]: E0122 01:04:56.805066 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:04:56.805470 kubelet[2860]: E0122 01:04:56.805216 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},VolumeMount{Name:kube-api-access-4pdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:56.806811 kubelet[2860]: E0122 01:04:56.806746 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:04:57.721549 containerd[1637]: time="2026-01-22T01:04:57.721358885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 01:04:57.799263 containerd[1637]: time="2026-01-22T01:04:57.799126435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:57.800529 containerd[1637]: time="2026-01-22T01:04:57.800481294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 01:04:57.800901 containerd[1637]: time="2026-01-22T01:04:57.800640961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:57.800976 kubelet[2860]: E0122 01:04:57.800913 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:57.800976 kubelet[2860]: E0122 01:04:57.800960 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:04:57.801158 kubelet[2860]: E0122 01:04:57.801089 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 22 01:04:57.804613 containerd[1637]: time="2026-01-22T01:04:57.804468261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 01:04:57.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:47862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:04:57.857146 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:47862.service - OpenSSH per-connection server daemon (10.0.0.1:47862). Jan 22 01:04:57.860308 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 22 01:04:57.860556 kernel: audit: type=1130 audit(1769043897.855:736): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:47862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:04:57.868402 containerd[1637]: time="2026-01-22T01:04:57.868216755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:57.870020 containerd[1637]: time="2026-01-22T01:04:57.869830827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 01:04:57.870020 containerd[1637]: time="2026-01-22T01:04:57.869937795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:57.870232 kubelet[2860]: E0122 01:04:57.870189 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:57.871343 kubelet[2860]: E0122 01:04:57.870638 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:04:57.871343 kubelet[2860]: E0122 01:04:57.870949 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:57.872459 kubelet[2860]: E0122 01:04:57.872257 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:04:57.968000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:57.969839 sshd[5084]: Accepted publickey for core from 10.0.0.1 
port 47862 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:04:57.978000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:57.981296 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:04:57.990033 systemd-logind[1609]: New session 8 of user core. Jan 22 01:04:57.992631 kernel: audit: type=1101 audit(1769043897.968:737): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:57.992701 kernel: audit: type=1103 audit(1769043897.978:738): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:57.998763 kernel: audit: type=1006 audit(1769043897.978:739): pid=5084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 22 01:04:57.978000 audit[5084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc700ad490 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:58.008346 kernel: audit: type=1300 audit(1769043897.978:739): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc700ad490 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:04:58.008557 kernel: audit: type=1327 audit(1769043897.978:739): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:04:57.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:04:58.017712 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 22 01:04:58.021000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.043478 kernel: audit: type=1105 audit(1769043898.021:740): pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.043572 kernel: audit: type=1103 audit(1769043898.021:741): pid=5088 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.021000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.170034 sshd[5088]: Connection closed by 10.0.0.1 port 47862 Jan 22 01:04:58.170470 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Jan 22 01:04:58.171000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.176500 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Jan 22 01:04:58.176847 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:47862.service: Deactivated successfully. Jan 22 01:04:58.179990 systemd[1]: session-8.scope: Deactivated successfully. Jan 22 01:04:58.183726 systemd-logind[1609]: Removed session 8. Jan 22 01:04:58.192584 kernel: audit: type=1106 audit(1769043898.171:742): pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.192630 kernel: audit: type=1104 audit(1769043898.171:743): pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.171000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:04:58.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:47862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:04:59.722306 containerd[1637]: time="2026-01-22T01:04:59.722211227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 01:04:59.799429 containerd[1637]: time="2026-01-22T01:04:59.799238639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:59.801275 containerd[1637]: time="2026-01-22T01:04:59.801245853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 01:04:59.801927 containerd[1637]: time="2026-01-22T01:04:59.801510692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:59.802454 kubelet[2860]: E0122 01:04:59.802291 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:59.803593 kubelet[2860]: E0122 01:04:59.802462 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:04:59.804631 kubelet[2860]: E0122 01:04:59.804547 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:59.806611 containerd[1637]: time="2026-01-22T01:04:59.806239327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:59.806760 kubelet[2860]: E0122 01:04:59.806501 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:04:59.886443 containerd[1637]: time="2026-01-22T01:04:59.886262196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:59.887980 containerd[1637]: 
time="2026-01-22T01:04:59.887839219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:59.888058 containerd[1637]: time="2026-01-22T01:04:59.887976280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:59.888321 kubelet[2860]: E0122 01:04:59.888161 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:59.888321 kubelet[2860]: E0122 01:04:59.888225 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:59.889819 kubelet[2860]: E0122 01:04:59.888593 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g2d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:59.889973 containerd[1637]: time="2026-01-22T01:04:59.889094644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:04:59.890210 kubelet[2860]: E0122 01:04:59.890088 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:04:59.951997 containerd[1637]: time="2026-01-22T01:04:59.951920199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:04:59.953766 containerd[1637]: time="2026-01-22T01:04:59.953594498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:04:59.953766 containerd[1637]: time="2026-01-22T01:04:59.953629183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:04:59.953956 kubelet[2860]: E0122 01:04:59.953859 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:59.953956 kubelet[2860]: E0122 01:04:59.953945 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:04:59.954202 kubelet[2860]: E0122 01:04:59.954064 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw4gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-xxfmg_calico-apiserver(08120695-b0cc-4d57-901b-351d05e2677f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:04:59.955734 kubelet[2860]: E0122 01:04:59.955646 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:05:03.196175 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:47876.service - OpenSSH per-connection server daemon (10.0.0.1:47876). 
Jan 22 01:05:03.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:47876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:03.200347 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:03.200506 kernel: audit: type=1130 audit(1769043903.195:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:47876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:03.341000 audit[5104]: USER_ACCT pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.345689 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 47876 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:03.349816 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:03.347000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.368486 systemd-logind[1609]: New session 9 of user core. 
Jan 22 01:05:03.372196 kernel: audit: type=1101 audit(1769043903.341:746): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.372360 kernel: audit: type=1103 audit(1769043903.347:747): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.372484 kernel: audit: type=1006 audit(1769043903.347:748): pid=5104 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 22 01:05:03.381516 kernel: audit: type=1300 audit(1769043903.347:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc151b5ba0 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:03.347000 audit[5104]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc151b5ba0 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:03.398449 kernel: audit: type=1327 audit(1769043903.347:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:03.347000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:03.402169 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 22 01:05:03.406000 audit[5104]: USER_START pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.420474 kernel: audit: type=1105 audit(1769043903.406:749): pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.421000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.432465 kernel: audit: type=1103 audit(1769043903.421:750): pid=5107 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.581864 sshd[5107]: Connection closed by 10.0.0.1 port 47876 Jan 22 01:05:03.582662 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:03.583000 audit[5104]: USER_END pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.597455 kernel: audit: type=1106 audit(1769043903.583:751): pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.597717 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:47876.service: Deactivated successfully. Jan 22 01:05:03.600302 systemd[1]: session-9.scope: Deactivated successfully. Jan 22 01:05:03.583000 audit[5104]: CRED_DISP pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:03.604330 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Jan 22 01:05:03.606347 systemd-logind[1609]: Removed session 9. Jan 22 01:05:03.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:47876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:03.614449 kernel: audit: type=1104 audit(1769043903.583:752): pid=5104 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:04.730441 kubelet[2860]: E0122 01:05:04.730108 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:05:08.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:55334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:08.606580 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:55334.service - OpenSSH per-connection server daemon (10.0.0.1:55334). Jan 22 01:05:08.612487 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:08.612625 kernel: audit: type=1130 audit(1769043908.605:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:55334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 22 01:05:08.690000 audit[5124]: USER_ACCT pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.695012 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:08.700168 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 55334 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:08.708312 systemd-logind[1609]: New session 10 of user core. Jan 22 01:05:08.692000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.722412 kernel: audit: type=1101 audit(1769043908.690:755): pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.722515 kernel: audit: type=1103 audit(1769043908.692:756): pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.722604 kernel: audit: type=1006 audit(1769043908.692:757): pid=5124 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 22 01:05:08.692000 audit[5124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6b9876c0 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:08.739605 kernel: audit: type=1300 audit(1769043908.692:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6b9876c0 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:08.739770 kernel: audit: type=1327 audit(1769043908.692:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:08.692000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:08.745838 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 22 01:05:08.749000 audit[5124]: USER_START pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.763541 kernel: audit: type=1105 audit(1769043908.749:758): pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.763635 kernel: audit: type=1103 audit(1769043908.752:759): pid=5127 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.752000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.884853 sshd[5127]: Connection closed by 10.0.0.1 port 55334 Jan 22 01:05:08.887026 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:08.888000 audit[5124]: USER_END pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.910434 kernel: audit: type=1106 audit(1769043908.888:760): pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.910569 kernel: audit: type=1104 audit(1769043908.888:761): pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.888000 audit[5124]: CRED_DISP pid=5124 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.906610 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:55334.service: Deactivated successfully. Jan 22 01:05:08.910326 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 22 01:05:08.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:55334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:08.911826 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Jan 22 01:05:08.916252 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:55338.service - OpenSSH per-connection server daemon (10.0.0.1:55338). Jan 22 01:05:08.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.144:22-10.0.0.1:55338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:08.918067 systemd-logind[1609]: Removed session 10. Jan 22 01:05:08.997000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.999222 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 55338 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:08.999000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:08.999000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff48113360 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:08.999000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:09.001474 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:09.009311 systemd-logind[1609]: New session 11 of user core. Jan 22 01:05:09.019703 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 22 01:05:09.022000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.026000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.228559 sshd[5145]: Connection closed by 10.0.0.1 port 55338 Jan 22 01:05:09.229804 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:09.235000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.235000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.144:22-10.0.0.1:55338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:09.245753 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:55338.service: Deactivated successfully. Jan 22 01:05:09.248355 systemd[1]: session-11.scope: Deactivated successfully. Jan 22 01:05:09.254513 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit. Jan 22 01:05:09.259016 systemd-logind[1609]: Removed session 11. Jan 22 01:05:09.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.144:22-10.0.0.1:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:09.267948 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:55346.service - OpenSSH per-connection server daemon (10.0.0.1:55346). Jan 22 01:05:09.341000 audit[5156]: USER_ACCT pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.343474 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 55346 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:09.344000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.344000 audit[5156]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd86bbba0 a2=3 a3=0 items=0 ppid=1 pid=5156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:09.344000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:09.347281 sshd-session[5156]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:09.357773 systemd-logind[1609]: New session 12 of user core. Jan 22 01:05:09.371585 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 22 01:05:09.379000 audit[5156]: USER_START pid=5156 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.383000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.500800 sshd[5159]: Connection closed by 10.0.0.1 port 55346 Jan 22 01:05:09.501188 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:09.502000 audit[5156]: USER_END pid=5156 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.502000 audit[5156]: CRED_DISP pid=5156 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:09.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.144:22-10.0.0.1:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:09.506929 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:55346.service: Deactivated successfully. Jan 22 01:05:09.509262 systemd[1]: session-12.scope: Deactivated successfully. Jan 22 01:05:09.511651 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit. Jan 22 01:05:09.513559 systemd-logind[1609]: Removed session 12. Jan 22 01:05:10.721779 kubelet[2860]: E0122 01:05:10.721580 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:05:11.695306 update_engine[1618]: I20260122 01:05:11.695104 1618 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 22 01:05:11.695306 update_engine[1618]: I20260122 01:05:11.695247 1618 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.697654 1618 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698261 1618 omaha_request_params.cc:62] Current group set to beta Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698474 1618 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698496 1618 update_attempter.cc:643] Scheduling an action processor start. 
Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698529 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698594 1618 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698673 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698684 1618 omaha_request_action.cc:272] Request: Jan 22 01:05:11.705069 update_engine[1618]: [Omaha request XML body not captured; markup stripped during log extraction] Jan 22 01:05:11.705069 update_engine[1618]: I20260122 01:05:11.698693 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 22 01:05:11.709707 update_engine[1618]: I20260122 01:05:11.708974 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 22 01:05:11.710646 update_engine[1618]: I20260122 01:05:11.710592 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 22 01:05:11.718969 locksmithd[1679]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 22 01:05:11.722200 kubelet[2860]: E0122 01:05:11.722168 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:05:11.727316 update_engine[1618]: E20260122 01:05:11.727102 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 22 01:05:11.727316 update_engine[1618]: I20260122 01:05:11.727221 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 22 01:05:12.722297 kubelet[2860]: E0122 01:05:12.722241 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:05:12.724758 kubelet[2860]: E0122 01:05:12.722944 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:05:13.721139 kubelet[2860]: E0122 01:05:13.721056 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:05:14.530658 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:44556.service - OpenSSH per-connection server daemon (10.0.0.1:44556). Jan 22 01:05:14.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:14.533188 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 01:05:14.533355 kernel: audit: type=1130 audit(1769043914.529:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:14.620000 audit[5176]: USER_ACCT pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.622284 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 44556 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:14.624868 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:14.622000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.637656 systemd-logind[1609]: New session 13 of user core. Jan 22 01:05:14.644091 kernel: audit: type=1101 audit(1769043914.620:782): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.644194 kernel: audit: type=1103 audit(1769043914.622:783): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.650393 kernel: audit: type=1006 audit(1769043914.622:784): pid=5176 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 22 01:05:14.622000 audit[5176]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c205950 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:14.663792 kernel: audit: type=1300 audit(1769043914.622:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c205950 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:14.668836 kernel: audit: type=1327 audit(1769043914.622:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:14.622000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:14.670825 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 22 01:05:14.677000 audit[5176]: USER_START pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.681000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.703165 kernel: audit: type=1105 audit(1769043914.677:785): pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.703446 kernel: audit: type=1103 audit(1769043914.681:786): pid=5179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.823503 sshd[5179]: Connection closed by 10.0.0.1 port 44556 Jan 22 01:05:14.824057 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:14.839523 kernel: audit: type=1106 audit(1769043914.826:787): pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.826000 audit[5176]: USER_END pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.831492 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:44556.service: Deactivated successfully. Jan 22 01:05:14.834635 systemd[1]: session-13.scope: Deactivated successfully. Jan 22 01:05:14.836576 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Jan 22 01:05:14.838764 systemd-logind[1609]: Removed session 13. Jan 22 01:05:14.826000 audit[5176]: CRED_DISP pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:14.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:14.854468 kernel: audit: type=1104 audit(1769043914.826:788): pid=5176 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:17.720489 kubelet[2860]: E0122 01:05:17.720351 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:05:18.722247 kubelet[2860]: E0122 01:05:18.722175 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:05:19.851046 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:44584.service - OpenSSH per-connection server daemon (10.0.0.1:44584). Jan 22 01:05:19.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:44584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:19.853512 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:19.853592 kernel: audit: type=1130 audit(1769043919.849:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:44584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:19.948000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:19.949777 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 44584 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:19.955103 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:19.963263 systemd-logind[1609]: New session 14 of user core. 
Jan 22 01:05:19.952000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:19.977873 kernel: audit: type=1101 audit(1769043919.948:791): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:19.978108 kernel: audit: type=1103 audit(1769043919.952:792): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:19.978443 kernel: audit: type=1006 audit(1769043919.952:793): pid=5219 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 22 01:05:19.952000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1dc39da0 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:19.952000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:19.999989 kernel: audit: type=1300 audit(1769043919.952:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1dc39da0 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:20.000063 kernel: audit: type=1327 
audit(1769043919.952:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:20.000695 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 22 01:05:20.003000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.017445 kernel: audit: type=1105 audit(1769043920.003:794): pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.003000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.029484 kernel: audit: type=1103 audit(1769043920.003:795): pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.148791 sshd[5222]: Connection closed by 10.0.0.1 port 44584 Jan 22 01:05:20.149806 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:20.151000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 
01:05:20.158097 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:44584.service: Deactivated successfully. Jan 22 01:05:20.161570 systemd[1]: session-14.scope: Deactivated successfully. Jan 22 01:05:20.162983 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. Jan 22 01:05:20.151000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.166030 systemd-logind[1609]: Removed session 14. Jan 22 01:05:20.172774 kernel: audit: type=1106 audit(1769043920.151:796): pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.172853 kernel: audit: type=1104 audit(1769043920.151:797): pid=5219 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:20.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:44584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:21.649155 update_engine[1618]: I20260122 01:05:21.649012 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 22 01:05:21.649155 update_engine[1618]: I20260122 01:05:21.649142 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 22 01:05:21.649790 update_engine[1618]: I20260122 01:05:21.649730 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 22 01:05:21.666006 update_engine[1618]: E20260122 01:05:21.665874 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 22 01:05:21.666062 update_engine[1618]: I20260122 01:05:21.666013 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 22 01:05:23.723214 kubelet[2860]: E0122 01:05:23.723107 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:05:24.725180 kubelet[2860]: E0122 01:05:24.725097 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:05:24.726612 kubelet[2860]: E0122 01:05:24.725720 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:05:24.728312 kubelet[2860]: E0122 01:05:24.728170 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:05:25.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:58636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:25.174447 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:58636.service - OpenSSH per-connection server daemon (10.0.0.1:58636). Jan 22 01:05:25.178294 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:25.178451 kernel: audit: type=1130 audit(1769043925.173:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:58636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:25.293704 kernel: audit: type=1101 audit(1769043925.277:800): pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.277000 audit[5240]: USER_ACCT pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.293916 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:25.295728 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:25.303842 systemd-logind[1609]: New session 15 of user core. Jan 22 01:05:25.293000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.316511 kernel: audit: type=1103 audit(1769043925.293:801): pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.322157 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 22 01:05:25.331485 kernel: audit: type=1006 audit(1769043925.293:802): pid=5240 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 22 01:05:25.331592 kernel: audit: type=1300 audit(1769043925.293:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede4e9760 a2=3 a3=0 items=0 ppid=1 pid=5240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:25.293000 audit[5240]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede4e9760 a2=3 a3=0 items=0 ppid=1 pid=5240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:25.293000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:25.353280 kernel: audit: type=1327 audit(1769043925.293:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:25.353613 kernel: audit: type=1105 audit(1769043925.325:803): pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.325000 audit[5240]: USER_START pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.330000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.379285 kernel: audit: type=1103 audit(1769043925.330:804): pid=5243 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.473165 sshd[5243]: Connection closed by 10.0.0.1 port 58636 Jan 22 01:05:25.475686 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:25.476000 audit[5240]: USER_END pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.482805 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:58636.service: Deactivated successfully. Jan 22 01:05:25.486888 systemd[1]: session-15.scope: Deactivated successfully. Jan 22 01:05:25.488507 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. Jan 22 01:05:25.477000 audit[5240]: CRED_DISP pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.490852 systemd-logind[1609]: Removed session 15. 
Jan 22 01:05:25.501465 kernel: audit: type=1106 audit(1769043925.476:805): pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.501542 kernel: audit: type=1104 audit(1769043925.477:806): pid=5240 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:25.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:58636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:26.723639 kubelet[2860]: E0122 01:05:26.723541 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:05:29.720438 kubelet[2860]: E0122 01:05:29.720262 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:05:30.491969 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:58676.service - OpenSSH per-connection server daemon (10.0.0.1:58676). 
Jan 22 01:05:30.498520 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:30.498667 kernel: audit: type=1130 audit(1769043930.491:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:30.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:30.568000 audit[5257]: USER_ACCT pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.576569 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 58676 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:30.579273 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:30.577000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.587356 systemd-logind[1609]: New session 16 of user core. 
Jan 22 01:05:30.605183 kernel: audit: type=1101 audit(1769043930.568:809): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.605292 kernel: audit: type=1103 audit(1769043930.577:810): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.605441 kernel: audit: type=1006 audit(1769043930.577:811): pid=5257 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 22 01:05:30.577000 audit[5257]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff88f01270 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:30.631465 kernel: audit: type=1300 audit(1769043930.577:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff88f01270 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:30.631566 kernel: audit: type=1327 audit(1769043930.577:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:30.577000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:30.637262 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 22 01:05:30.640000 audit[5257]: USER_START pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.643000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.663326 kernel: audit: type=1105 audit(1769043930.640:812): pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.663502 kernel: audit: type=1103 audit(1769043930.643:813): pid=5260 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.720844 kubelet[2860]: E0122 01:05:30.720736 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:05:30.727212 kubelet[2860]: E0122 01:05:30.726964 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:05:30.781715 sshd[5260]: Connection closed by 10.0.0.1 port 58676 Jan 22 01:05:30.782194 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:30.782000 audit[5257]: USER_END pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.783000 audit[5257]: CRED_DISP pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.812748 kernel: audit: type=1106 audit(1769043930.782:814): pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.813104 kernel: audit: type=1104 audit(1769043930.783:815): pid=5257 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.821594 systemd[1]: 
sshd@15-10.0.0.144:22-10.0.0.1:58676.service: Deactivated successfully. Jan 22 01:05:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:30.827850 systemd[1]: session-16.scope: Deactivated successfully. Jan 22 01:05:30.832197 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. Jan 22 01:05:30.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.144:22-10.0.0.1:58682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:30.845866 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:58682.service - OpenSSH per-connection server daemon (10.0.0.1:58682). Jan 22 01:05:30.854742 systemd-logind[1609]: Removed session 16. Jan 22 01:05:30.944000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.946050 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 58682 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:30.947000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.947000 audit[5275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6df35230 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:30.947000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:30.949873 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:30.960717 systemd-logind[1609]: New session 17 of user core. Jan 22 01:05:30.967858 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 22 01:05:30.973000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:30.979000 audit[5278]: CRED_ACQ pid=5278 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.318105 sshd[5278]: Connection closed by 10.0.0.1 port 58682 Jan 22 01:05:31.318917 sshd-session[5275]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:31.325000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.325000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.336239 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:58682.service: Deactivated successfully. 
Jan 22 01:05:31.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.144:22-10.0.0.1:58682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:31.342154 systemd[1]: session-17.scope: Deactivated successfully. Jan 22 01:05:31.345306 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit. Jan 22 01:05:31.359129 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:58684.service - OpenSSH per-connection server daemon (10.0.0.1:58684). Jan 22 01:05:31.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.144:22-10.0.0.1:58684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:31.361589 systemd-logind[1609]: Removed session 17. Jan 22 01:05:31.440000 audit[5289]: USER_ACCT pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.442097 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 58684 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:31.442000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.442000 audit[5289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb2c47cd0 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
01:05:31.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:31.444923 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:31.455492 systemd-logind[1609]: New session 18 of user core. Jan 22 01:05:31.461883 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 22 01:05:31.466000 audit[5289]: USER_START pid=5289 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.470000 audit[5292]: CRED_ACQ pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:31.647663 update_engine[1618]: I20260122 01:05:31.647444 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 22 01:05:31.650603 update_engine[1618]: I20260122 01:05:31.648830 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 22 01:05:31.650603 update_engine[1618]: I20260122 01:05:31.649331 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 22 01:05:31.666516 update_engine[1618]: E20260122 01:05:31.666451 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 22 01:05:31.666787 update_engine[1618]: I20260122 01:05:31.666762 1618 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 22 01:05:32.212613 sshd[5292]: Connection closed by 10.0.0.1 port 58684 Jan 22 01:05:32.212977 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:32.213000 audit[5289]: USER_END pid=5289 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.213000 audit[5289]: CRED_DISP pid=5289 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.225067 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:58684.service: Deactivated successfully. Jan 22 01:05:32.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.144:22-10.0.0.1:58684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:32.229637 systemd[1]: session-18.scope: Deactivated successfully. Jan 22 01:05:32.234272 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit. Jan 22 01:05:32.237275 systemd-logind[1609]: Removed session 18. Jan 22 01:05:32.241826 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:58696.service - OpenSSH per-connection server daemon (10.0.0.1:58696). 
Jan 22 01:05:32.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.144:22-10.0.0.1:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:32.266000 audit[5313]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:32.266000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffa3233b40 a2=0 a3=7fffa3233b2c items=0 ppid=3028 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:32.281000 audit[5313]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:32.281000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffa3233b40 a2=0 a3=0 items=0 ppid=3028 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:32.309000 audit[5317]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:32.309000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd448afb0 a2=0 a3=7fffd448af9c items=0 ppid=3028 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:32.315000 audit[5317]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:32.315000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd448afb0 a2=0 a3=0 items=0 ppid=3028 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:32.325000 audit[5312]: USER_ACCT pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.326796 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 58696 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:32.327000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.327000 audit[5312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee3506480 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:32.329643 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:32.339952 systemd-logind[1609]: New session 19 of user core. Jan 22 01:05:32.346697 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 22 01:05:32.354000 audit[5312]: USER_START pid=5312 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.359000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.711547 sshd[5318]: Connection closed by 10.0.0.1 port 58696 Jan 22 01:05:32.713266 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:32.714000 audit[5312]: USER_END pid=5312 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.715000 audit[5312]: CRED_DISP pid=5312 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.730737 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:58708.service - OpenSSH per-connection server 
daemon (10.0.0.1:58708). Jan 22 01:05:32.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.144:22-10.0.0.1:58708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:32.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.144:22-10.0.0.1:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:32.731898 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:58696.service: Deactivated successfully. Jan 22 01:05:32.743078 systemd[1]: session-19.scope: Deactivated successfully. Jan 22 01:05:32.746565 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit. Jan 22 01:05:32.751787 systemd-logind[1609]: Removed session 19. Jan 22 01:05:32.824000 audit[5327]: USER_ACCT pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.826471 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 58708 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:32.826000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.826000 audit[5327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc042dce0 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:32.826000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:32.830057 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:32.841897 systemd-logind[1609]: New session 20 of user core. Jan 22 01:05:32.846778 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 22 01:05:32.853000 audit[5327]: USER_START pid=5327 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:32.857000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:33.017073 sshd[5333]: Connection closed by 10.0.0.1 port 58708 Jan 22 01:05:33.018205 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:33.020000 audit[5327]: USER_END pid=5327 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:33.020000 audit[5327]: CRED_DISP pid=5327 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:33.025810 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:58708.service: Deactivated successfully. 
Jan 22 01:05:33.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.144:22-10.0.0.1:58708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:33.030247 systemd[1]: session-20.scope: Deactivated successfully. Jan 22 01:05:33.035116 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit. Jan 22 01:05:33.038078 systemd-logind[1609]: Removed session 20. Jan 22 01:05:35.722121 kubelet[2860]: E0122 01:05:35.722066 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5" Jan 22 01:05:35.724204 kubelet[2860]: E0122 01:05:35.723233 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" 
podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:05:38.044653 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 22 01:05:38.044803 kernel: audit: type=1130 audit(1769043938.035:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:59170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:38.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:59170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:38.036351 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:59170.service - OpenSSH per-connection server daemon (10.0.0.1:59170). Jan 22 01:05:38.112000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.120762 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 59170 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:38.126496 kernel: audit: type=1101 audit(1769043938.112:858): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.125000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.128492 sshd-session[5354]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 22 01:05:38.143494 kernel: audit: type=1103 audit(1769043938.125:859): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.143613 kernel: audit: type=1006 audit(1769043938.125:860): pid=5354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 22 01:05:38.138805 systemd-logind[1609]: New session 21 of user core. Jan 22 01:05:38.125000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1cc99250 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:38.125000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:38.159577 kernel: audit: type=1300 audit(1769043938.125:860): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1cc99250 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:38.159842 kernel: audit: type=1327 audit(1769043938.125:860): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:38.160767 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 22 01:05:38.164000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.177515 kernel: audit: type=1105 audit(1769043938.164:861): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.177696 kernel: audit: type=1103 audit(1769043938.168:862): pid=5357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.168000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.350314 sshd[5357]: Connection closed by 10.0.0.1 port 59170 Jan 22 01:05:38.352884 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:38.353000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.358924 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:59170.service: Deactivated successfully. Jan 22 01:05:38.363084 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 22 01:05:38.365551 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit. Jan 22 01:05:38.368619 systemd-logind[1609]: Removed session 21. Jan 22 01:05:38.371510 kernel: audit: type=1106 audit(1769043938.353:863): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.353000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:59170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:38.382554 kernel: audit: type=1104 audit(1769043938.353:864): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:38.721609 containerd[1637]: time="2026-01-22T01:05:38.721225901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 01:05:38.830314 containerd[1637]: time="2026-01-22T01:05:38.830261426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:38.832305 containerd[1637]: time="2026-01-22T01:05:38.832218737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 01:05:38.832875 containerd[1637]: time="2026-01-22T01:05:38.832507212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:38.833870 kubelet[2860]: E0122 01:05:38.833785 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:05:38.835226 kubelet[2860]: E0122 01:05:38.833879 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 01:05:38.835226 kubelet[2860]: E0122 
01:05:38.834168 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-d659955cd-4lv6v_calico-system(410a0576-5e2f-4491-9946-abeea17a07fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:38.836066 kubelet[2860]: E0122 01:05:38.835804 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d659955cd-4lv6v" podUID="410a0576-5e2f-4491-9946-abeea17a07fc" Jan 22 01:05:39.677000 audit[5370]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:39.677000 audit[5370]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd42be1380 a2=0 a3=7ffd42be136c 
items=0 ppid=3028 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:39.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:39.686000 audit[5370]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 01:05:39.686000 audit[5370]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd42be1380 a2=0 a3=7ffd42be136c items=0 ppid=3028 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:39.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 01:05:39.721221 kubelet[2860]: E0122 01:05:39.721176 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-xxfmg" podUID="08120695-b0cc-4d57-901b-351d05e2677f" Jan 22 01:05:41.650559 update_engine[1618]: I20260122 01:05:41.650439 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 22 01:05:41.650559 update_engine[1618]: I20260122 01:05:41.650561 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 22 01:05:41.651253 update_engine[1618]: I20260122 01:05:41.651000 
1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 22 01:05:41.667559 update_engine[1618]: E20260122 01:05:41.667454 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 22 01:05:41.667741 update_engine[1618]: I20260122 01:05:41.667603 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 22 01:05:41.667741 update_engine[1618]: I20260122 01:05:41.667619 1618 omaha_request_action.cc:617] Omaha request response: Jan 22 01:05:41.667800 update_engine[1618]: E20260122 01:05:41.667768 1618 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 22 01:05:41.667899 update_engine[1618]: I20260122 01:05:41.667821 1618 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 22 01:05:41.667899 update_engine[1618]: I20260122 01:05:41.667876 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 22 01:05:41.667899 update_engine[1618]: I20260122 01:05:41.667886 1618 update_attempter.cc:306] Processing Done. Jan 22 01:05:41.667990 update_engine[1618]: E20260122 01:05:41.667903 1618 update_attempter.cc:619] Update failed. Jan 22 01:05:41.667990 update_engine[1618]: I20260122 01:05:41.667912 1618 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 22 01:05:41.667990 update_engine[1618]: I20260122 01:05:41.667919 1618 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 22 01:05:41.667990 update_engine[1618]: I20260122 01:05:41.667927 1618 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 22 01:05:41.668127 update_engine[1618]: I20260122 01:05:41.668092 1618 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 22 01:05:41.668154 update_engine[1618]: I20260122 01:05:41.668139 1618 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 22 01:05:41.668177 update_engine[1618]: I20260122 01:05:41.668154 1618 omaha_request_action.cc:272] Request: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: Jan 22 01:05:41.668177 update_engine[1618]: I20260122 01:05:41.668165 1618 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 22 01:05:41.668320 update_engine[1618]: I20260122 01:05:41.668201 1618 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 22 01:05:41.668762 update_engine[1618]: I20260122 01:05:41.668669 1618 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 22 01:05:41.668971 locksmithd[1679]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 22 01:05:41.686873 update_engine[1618]: E20260122 01:05:41.686681 1618 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686843 1618 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686858 1618 omaha_request_action.cc:617] Omaha request response: Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686869 1618 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686877 1618 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686884 1618 update_attempter.cc:306] Processing Done. Jan 22 01:05:41.686873 update_engine[1618]: I20260122 01:05:41.686894 1618 update_attempter.cc:310] Error event sent. 
Jan 22 01:05:41.688668 update_engine[1618]: I20260122 01:05:41.686908 1618 update_check_scheduler.cc:74] Next update check in 43m36s Jan 22 01:05:41.688723 locksmithd[1679]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 22 01:05:41.724475 containerd[1637]: time="2026-01-22T01:05:41.723498695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 01:05:41.806562 containerd[1637]: time="2026-01-22T01:05:41.806333791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:41.809644 containerd[1637]: time="2026-01-22T01:05:41.809557882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 01:05:41.810130 containerd[1637]: time="2026-01-22T01:05:41.809889039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:41.810290 kubelet[2860]: E0122 01:05:41.810204 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:05:41.810290 kubelet[2860]: E0122 01:05:41.810263 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 01:05:41.811634 kubelet[2860]: E0122 01:05:41.810662 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gwr5j_calico-system(03592d59-bdcf-436c-be98-9c688e9b6f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:41.811880 kubelet[2860]: E0122 01:05:41.811834 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gwr5j" podUID="03592d59-bdcf-436c-be98-9c688e9b6f7e" Jan 22 01:05:42.723108 containerd[1637]: time="2026-01-22T01:05:42.722843966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 01:05:42.788729 containerd[1637]: time="2026-01-22T01:05:42.788610107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:42.790920 containerd[1637]: 
time="2026-01-22T01:05:42.790743290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 01:05:42.790920 containerd[1637]: time="2026-01-22T01:05:42.790889953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:42.791293 kubelet[2860]: E0122 01:05:42.791208 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:05:42.791434 kubelet[2860]: E0122 01:05:42.791307 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 01:05:42.791553 kubelet[2860]: E0122 01:05:42.791496 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be94bbf4992b4f04a6ddb3aecaced976,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:42.794454 containerd[1637]: time="2026-01-22T01:05:42.794425578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 01:05:42.854689 containerd[1637]: 
time="2026-01-22T01:05:42.854600779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:42.856559 containerd[1637]: time="2026-01-22T01:05:42.856475722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 01:05:42.856660 containerd[1637]: time="2026-01-22T01:05:42.856583192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:42.857156 kubelet[2860]: E0122 01:05:42.856940 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:05:42.857156 kubelet[2860]: E0122 01:05:42.857076 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 01:05:42.857825 kubelet[2860]: E0122 01:05:42.857612 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589cfcc65-g4r68_calico-system(5adc9a7e-5d84-4421-b95f-9f8854c5ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:42.859215 kubelet[2860]: E0122 01:05:42.859109 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589cfcc65-g4r68" podUID="5adc9a7e-5d84-4421-b95f-9f8854c5ffaa" Jan 22 01:05:43.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:43.373578 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:59202.service - OpenSSH per-connection server daemon (10.0.0.1:59202). Jan 22 01:05:43.376593 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 22 01:05:43.377132 kernel: audit: type=1130 audit(1769043943.372:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:43.460837 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 59202 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:43.459000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.462961 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:43.460000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.485433 kernel: audit: type=1101 audit(1769043943.459:869): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.485532 kernel: audit: type=1103 audit(1769043943.460:870): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.485568 kernel: audit: type=1006 audit(1769043943.460:871): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 22 01:05:43.482875 systemd-logind[1609]: New session 22 of user core. 
Jan 22 01:05:43.460000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc041ba8c0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:43.501208 kernel: audit: type=1300 audit(1769043943.460:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc041ba8c0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:43.460000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:43.505855 kernel: audit: type=1327 audit(1769043943.460:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:43.510807 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 22 01:05:43.514000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.514000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.545285 kernel: audit: type=1105 audit(1769043943.514:872): pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.545469 kernel: audit: type=1103 audit(1769043943.514:873): pid=5377 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.655493 sshd[5377]: Connection closed by 10.0.0.1 port 59202 Jan 22 01:05:43.655937 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:43.656000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.662189 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:59202.service: Deactivated successfully. Jan 22 01:05:43.667896 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 22 01:05:43.657000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.672160 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit. Jan 22 01:05:43.674088 systemd-logind[1609]: Removed session 22. Jan 22 01:05:43.680887 kernel: audit: type=1106 audit(1769043943.656:874): pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.680958 kernel: audit: type=1104 audit(1769043943.657:875): pid=5374 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:43.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:59202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:47.720207 kubelet[2860]: E0122 01:05:47.720126 2860 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 01:05:47.721323 containerd[1637]: time="2026-01-22T01:05:47.721255488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 01:05:47.796439 containerd[1637]: time="2026-01-22T01:05:47.796171755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:47.798293 containerd[1637]: time="2026-01-22T01:05:47.798181578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 01:05:47.798293 containerd[1637]: time="2026-01-22T01:05:47.798262829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:47.798741 kubelet[2860]: E0122 01:05:47.798643 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:05:47.798841 kubelet[2860]: E0122 01:05:47.798740 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 01:05:47.799614 kubelet[2860]: E0122 01:05:47.799475 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 22 01:05:47.801989 containerd[1637]: time="2026-01-22T01:05:47.801961528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 01:05:47.872751 containerd[1637]: time="2026-01-22T01:05:47.872658958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:47.874241 containerd[1637]: time="2026-01-22T01:05:47.874168348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 01:05:47.874444 containerd[1637]: time="2026-01-22T01:05:47.874270609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:47.874626 kubelet[2860]: E0122 01:05:47.874571 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:05:47.874688 kubelet[2860]: E0122 01:05:47.874628 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 01:05:47.874788 kubelet[2860]: E0122 01:05:47.874731 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8grjm_calico-system(3eaef106-75d7-42c8-a82e-57ae58f4f9cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:47.876167 kubelet[2860]: E0122 01:05:47.876128 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8grjm" podUID="3eaef106-75d7-42c8-a82e-57ae58f4f9cb" Jan 22 01:05:48.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:41110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 01:05:48.675521 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:41110.service - OpenSSH per-connection server daemon (10.0.0.1:41110). Jan 22 01:05:48.678310 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 01:05:48.678540 kernel: audit: type=1130 audit(1769043948.674:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:41110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:48.762482 kernel: audit: type=1101 audit(1769043948.744:878): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.744000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.762775 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 41110 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 01:05:48.761000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.763981 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 01:05:48.775450 systemd-logind[1609]: New session 23 of user core. 
Jan 22 01:05:48.779509 kernel: audit: type=1103 audit(1769043948.761:879): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.779579 kernel: audit: type=1006 audit(1769043948.761:880): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 22 01:05:48.779611 kernel: audit: type=1300 audit(1769043948.761:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe84ce8b10 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:48.761000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe84ce8b10 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 01:05:48.789836 kernel: audit: type=1327 audit(1769043948.761:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:48.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 01:05:48.794668 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 22 01:05:48.799000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.802000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.823856 kernel: audit: type=1105 audit(1769043948.799:881): pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.823923 kernel: audit: type=1103 audit(1769043948.802:882): pid=5419 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.931797 sshd[5419]: Connection closed by 10.0.0.1 port 41110 Jan 22 01:05:48.933630 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 22 01:05:48.935000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.940807 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:41110.service: Deactivated successfully. Jan 22 01:05:48.944644 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 22 01:05:48.949645 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit. Jan 22 01:05:48.956700 systemd-logind[1609]: Removed session 23. Jan 22 01:05:48.935000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.971200 kernel: audit: type=1106 audit(1769043948.935:883): pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.971323 kernel: audit: type=1104 audit(1769043948.935:884): pid=5416 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 01:05:48.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:41110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 01:05:49.721561 containerd[1637]: time="2026-01-22T01:05:49.721444003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 01:05:49.780011 containerd[1637]: time="2026-01-22T01:05:49.779843454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 01:05:49.781673 containerd[1637]: time="2026-01-22T01:05:49.781491100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 01:05:49.781673 containerd[1637]: time="2026-01-22T01:05:49.781578653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 01:05:49.781994 kubelet[2860]: E0122 01:05:49.781823 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:05:49.781994 kubelet[2860]: E0122 01:05:49.781892 2860 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 01:05:49.782720 kubelet[2860]: E0122 01:05:49.782129 2860 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g2d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7968ffc4b-nnclr_calico-apiserver(3572850e-6c2e-4ed9-a568-75ca88ead4a5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 01:05:49.783469 kubelet[2860]: E0122 01:05:49.783424 2860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7968ffc4b-nnclr" podUID="3572850e-6c2e-4ed9-a568-75ca88ead4a5"