Jan 15 00:27:56.279958 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 22:02:13 -00 2026
Jan 15 00:27:56.280011 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9
Jan 15 00:27:56.280024 kernel: BIOS-provided physical RAM map:
Jan 15 00:27:56.280031 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 15 00:27:56.280037 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 15 00:27:56.280043 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 15 00:27:56.280051 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 15 00:27:56.280057 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 15 00:27:56.280086 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 15 00:27:56.280093 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 15 00:27:56.280102 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 15 00:27:56.280109 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 15 00:27:56.280115 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 15 00:27:56.280121 kernel: NX (Execute Disable) protection: active
Jan 15 00:27:56.280129 kernel: APIC: Static calls initialized
Jan 15 00:27:56.280139 kernel: SMBIOS 2.8 present.
Jan 15 00:27:56.280166 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 15 00:27:56.280173 kernel: DMI: Memory slots populated: 1/1 Jan 15 00:27:56.280229 kernel: Hypervisor detected: KVM Jan 15 00:27:56.280237 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 15 00:27:56.280243 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 00:27:56.280250 kernel: kvm-clock: using sched offset of 5967325030 cycles Jan 15 00:27:56.280258 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 00:27:56.280265 kernel: tsc: Detected 2445.426 MHz processor Jan 15 00:27:56.280277 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 00:27:56.280284 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 00:27:56.280292 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 15 00:27:56.280299 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 15 00:27:56.280306 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 00:27:56.280313 kernel: Using GB pages for direct mapping Jan 15 00:27:56.280320 kernel: ACPI: Early table checksum verification disabled Jan 15 00:27:56.280330 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 15 00:27:56.280338 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280345 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280352 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280359 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 15 00:27:56.280366 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280373 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280383 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280391 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:27:56.280402 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 15 00:27:56.280409 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 15 00:27:56.280417 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 15 00:27:56.280426 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 15 00:27:56.280434 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 15 00:27:56.280441 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 15 00:27:56.280448 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 15 00:27:56.280455 kernel: No NUMA configuration found Jan 15 00:27:56.280463 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 15 00:27:56.280470 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 15 00:27:56.280480 kernel: Zone ranges: Jan 15 00:27:56.280488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 00:27:56.280495 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 15 00:27:56.280502 kernel: Normal empty Jan 15 00:27:56.280510 kernel: Device empty Jan 15 00:27:56.280517 kernel: Movable zone start for each node Jan 15 00:27:56.280524 kernel: Early memory node ranges Jan 15 00:27:56.280531 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 15 00:27:56.280541 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 15 00:27:56.280548 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 15 00:27:56.280556 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:27:56.280563 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 15 00:27:56.280594 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 15 00:27:56.280601 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 00:27:56.280609 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 00:27:56.280619 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 00:27:56.280627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 00:27:56.280654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 00:27:56.280691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 00:27:56.280699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 00:27:56.280706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 00:27:56.280714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 00:27:56.280724 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 00:27:56.280732 kernel: TSC deadline timer available Jan 15 00:27:56.280739 kernel: CPU topo: Max. logical packages: 1 Jan 15 00:27:56.280747 kernel: CPU topo: Max. logical dies: 1 Jan 15 00:27:56.280754 kernel: CPU topo: Max. dies per package: 1 Jan 15 00:27:56.280762 kernel: CPU topo: Max. threads per core: 1 Jan 15 00:27:56.280769 kernel: CPU topo: Num. cores per package: 4 Jan 15 00:27:56.280776 kernel: CPU topo: Num. threads per package: 4 Jan 15 00:27:56.280786 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 00:27:56.280793 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 00:27:56.280800 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 00:27:56.280808 kernel: kvm-guest: setup PV sched yield Jan 15 00:27:56.280815 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 15 00:27:56.280822 kernel: Booting paravirtualized kernel on KVM Jan 15 00:27:56.280830 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 00:27:56.280840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 00:27:56.280848 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 00:27:56.280855 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 00:27:56.280862 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 00:27:56.280869 kernel: kvm-guest: PV spinlocks enabled Jan 15 00:27:56.280877 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 00:27:56.280885 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:27:56.280895 kernel: random: crng init done Jan 15 00:27:56.280902 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 00:27:56.280910 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 
00:27:56.280917 kernel: Fallback order for Node 0: 0 Jan 15 00:27:56.280925 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 15 00:27:56.280932 kernel: Policy zone: DMA32 Jan 15 00:27:56.280940 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 00:27:56.280950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 00:27:56.280957 kernel: ftrace: allocating 40097 entries in 157 pages Jan 15 00:27:56.280965 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 00:27:56.280972 kernel: Dynamic Preempt: voluntary Jan 15 00:27:56.280979 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 00:27:56.280992 kernel: rcu: RCU event tracing is enabled. Jan 15 00:27:56.281000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 00:27:56.281007 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 00:27:56.281039 kernel: Rude variant of Tasks RCU enabled. Jan 15 00:27:56.281047 kernel: Tracing variant of Tasks RCU enabled. Jan 15 00:27:56.281054 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 00:27:56.281062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 00:27:56.281069 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:27:56.281077 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:27:56.281084 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:27:56.281094 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 00:27:56.281102 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 00:27:56.281118 kernel: Console: colour VGA+ 80x25 Jan 15 00:27:56.281128 kernel: printk: legacy console [ttyS0] enabled Jan 15 00:27:56.281136 kernel: ACPI: Core revision 20240827 Jan 15 00:27:56.281143 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 00:27:56.281151 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 00:27:56.281158 kernel: x2apic enabled Jan 15 00:27:56.281166 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 00:27:56.281227 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 00:27:56.281236 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 00:27:56.281245 kernel: kvm-guest: setup PV IPIs Jan 15 00:27:56.281252 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 00:27:56.281264 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 00:27:56.281272 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 15 00:27:56.281280 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 00:27:56.281287 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 00:27:56.281295 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 00:27:56.281303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 00:27:56.281311 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 00:27:56.281321 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 00:27:56.281328 kernel: Speculative Store Bypass: Vulnerable Jan 15 00:27:56.281336 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 00:27:56.281344 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 00:27:56.281352 kernel: active return thunk: srso_alias_return_thunk Jan 15 00:27:56.281360 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 00:27:56.281368 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 00:27:56.281378 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 00:27:56.281386 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 00:27:56.281393 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 00:27:56.281401 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 00:27:56.281409 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 00:27:56.281417 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 00:27:56.281425 kernel: Freeing SMP alternatives memory: 32K Jan 15 00:27:56.281435 kernel: pid_max: default: 32768 minimum: 301 Jan 15 00:27:56.281442 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 00:27:56.281450 kernel: landlock: Up and running. Jan 15 00:27:56.281458 kernel: SELinux: Initializing. Jan 15 00:27:56.281465 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:27:56.281473 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:27:56.281502 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 00:27:56.281513 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 00:27:56.281521 kernel: signal: max sigframe size: 1776 Jan 15 00:27:56.281528 kernel: rcu: Hierarchical SRCU implementation. Jan 15 00:27:56.281536 kernel: rcu: Max phase no-delay instances is 400. Jan 15 00:27:56.281544 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 00:27:56.281551 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 00:27:56.281559 kernel: smp: Bringing up secondary CPUs ... Jan 15 00:27:56.281569 kernel: smpboot: x86: Booting SMP configuration: Jan 15 00:27:56.281577 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 15 00:27:56.281585 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 00:27:56.281592 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 15 00:27:56.281600 kernel: Memory: 2447344K/2571752K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15432K init, 2608K bss, 118472K reserved, 0K cma-reserved) Jan 15 00:27:56.281608 kernel: devtmpfs: initialized Jan 15 00:27:56.281615 kernel: x86/mm: Memory block size: 128MB Jan 15 00:27:56.281626 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 00:27:56.281634 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 00:27:56.281641 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 00:27:56.281649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 00:27:56.281657 kernel: audit: initializing netlink subsys (disabled) Jan 15 00:27:56.281694 kernel: audit: type=2000 audit(1768436871.269:1): state=initialized audit_enabled=0 res=1 Jan 15 00:27:56.281701 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 00:27:56.281712 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 00:27:56.281719 kernel: cpuidle: using governor menu Jan 15 00:27:56.281727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 00:27:56.281735 kernel: dca service started, version 1.12.1 Jan 15 00:27:56.281743 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 15 00:27:56.281750 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 15 00:27:56.281758 kernel: PCI: Using configuration type 1 for base access Jan 15 00:27:56.281768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 00:27:56.281776 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 00:27:56.281784 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 00:27:56.281792 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 00:27:56.281799 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 00:27:56.281807 kernel: ACPI: Added _OSI(Module Device) Jan 15 00:27:56.281815 kernel: ACPI: Added _OSI(Processor Device) Jan 15 00:27:56.281825 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 00:27:56.281833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 00:27:56.281840 kernel: ACPI: Interpreter enabled Jan 15 00:27:56.281848 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 00:27:56.281855 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 00:27:56.281863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 00:27:56.281871 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 00:27:56.281879 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 00:27:56.281889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 00:27:56.282173 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 00:27:56.282467 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 00:27:56.282723 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 00:27:56.282735 kernel: PCI host bridge to bus 0000:00 Jan 15 00:27:56.282950 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 00:27:56.283146 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 00:27:56.283404 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 00:27:56.283597 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 15 00:27:56.283827 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 15 00:27:56.284019 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 15 00:27:56.284274 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 00:27:56.284541 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 00:27:56.284803 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 00:27:56.285085 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 15 00:27:56.285360 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 15 00:27:56.285574 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 15 00:27:56.285822 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 00:27:56.286038 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 00:27:56.286319 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 15 00:27:56.286528 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 15 00:27:56.286772 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 15 00:27:56.286995 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:27:56.287295 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 15 00:27:56.287510 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 15 00:27:56.287760 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 15 00:27:56.287978 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 00:27:56.288253 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 15 00:27:56.288467 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 15 00:27:56.288707 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 15 00:27:56.289009 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 15 00:27:56.289425 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 00:27:56.289782 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 00:27:56.290011 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 00:27:56.290283 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 15 00:27:56.290542 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 15 00:27:56.290800 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 00:27:56.291127 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 15 00:27:56.291141 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 00:27:56.291155 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 00:27:56.291163 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 00:27:56.291171 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 00:27:56.291252 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 00:27:56.291262 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 00:27:56.291270 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 00:27:56.291278 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 00:27:56.291290 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 00:27:56.291298 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 00:27:56.291306 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 00:27:56.291314 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 00:27:56.291321 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 00:27:56.291329 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 00:27:56.291337 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 00:27:56.291348 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 00:27:56.291356 kernel: iommu: Default domain type: Translated Jan 15 00:27:56.291363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 00:27:56.291371 kernel: PCI: Using ACPI for IRQ routing Jan 15 00:27:56.291379 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 00:27:56.291387 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 15 00:27:56.291395 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 15 00:27:56.291611 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 00:27:56.291856 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 00:27:56.292063 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 00:27:56.292074 kernel: vgaarb: loaded Jan 15 00:27:56.292082 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 00:27:56.292090 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 00:27:56.292098 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 00:27:56.292110 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 
00:27:56.292118 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 00:27:56.292126 kernel: pnp: PnP ACPI init Jan 15 00:27:56.292405 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 15 00:27:56.292419 kernel: pnp: PnP ACPI: found 6 devices Jan 15 00:27:56.292428 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 00:27:56.292441 kernel: NET: Registered PF_INET protocol family Jan 15 00:27:56.292449 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 00:27:56.292457 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 00:27:56.292465 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 00:27:56.292472 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 00:27:56.292480 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 00:27:56.292488 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 00:27:56.292499 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:27:56.292507 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:27:56.292514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 00:27:56.292522 kernel: NET: Registered PF_XDP protocol family Jan 15 00:27:56.292755 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 00:27:56.292951 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 00:27:56.293145 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 00:27:56.293407 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 15 00:27:56.293601 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 15 00:27:56.293833 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 15 00:27:56.293845 kernel: PCI: CLS 0 bytes, default 64 Jan 15 00:27:56.293853 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 00:27:56.293861 kernel: Initialise system trusted keyrings Jan 15 00:27:56.293869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 00:27:56.293882 kernel: Key type asymmetric registered Jan 15 00:27:56.293890 kernel: Asymmetric key parser 'x509' registered Jan 15 00:27:56.293897 kernel: hrtimer: interrupt took 3178805 ns Jan 15 00:27:56.293905 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 00:27:56.293913 kernel: io scheduler mq-deadline registered Jan 15 00:27:56.293921 kernel: io scheduler kyber registered Jan 15 00:27:56.293929 kernel: io scheduler bfq registered Jan 15 00:27:56.293940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 00:27:56.293948 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 00:27:56.293956 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 00:27:56.293964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 00:27:56.293972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 00:27:56.293980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 00:27:56.293988 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 00:27:56.293998 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 00:27:56.294033 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 
00:27:56.294304 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 00:27:56.294318 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 00:27:56.294519 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 00:27:56.294759 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T00:27:53 UTC (1768436873) Jan 15 00:27:56.294967 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 15 00:27:56.294978 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 00:27:56.294986 kernel: NET: Registered PF_INET6 protocol family Jan 15 00:27:56.294994 kernel: Segment Routing with IPv6 Jan 15 00:27:56.295002 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 00:27:56.295010 kernel: NET: Registered PF_PACKET protocol family Jan 15 00:27:56.295017 kernel: Key type dns_resolver registered Jan 15 00:27:56.295025 kernel: IPI shorthand broadcast: enabled Jan 15 00:27:56.295037 kernel: sched_clock: Marking stable (2925021479, 615908382)->(3726885650, -185955789) Jan 15 00:27:56.295044 kernel: registered taskstats version 1 Jan 15 00:27:56.295052 kernel: Loading compiled-in X.509 certificates Jan 15 00:27:56.295060 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8b6753a1cbf8103f5806ce5d59781743c62fae9' Jan 15 00:27:56.295068 kernel: Demotion targets for Node 0: null Jan 15 00:27:56.295075 kernel: Key type .fscrypt registered Jan 15 00:27:56.295083 kernel: Key type fscrypt-provisioning registered Jan 15 00:27:56.295094 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 15 00:27:56.295102 kernel: ima: Allocated hash algorithm: sha1 Jan 15 00:27:56.295109 kernel: ima: No architecture policies found Jan 15 00:27:56.295117 kernel: clk: Disabling unused clocks Jan 15 00:27:56.295125 kernel: Freeing unused kernel image (initmem) memory: 15432K Jan 15 00:27:56.295133 kernel: Write protecting the kernel read-only data: 45056k Jan 15 00:27:56.295141 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 15 00:27:56.295151 kernel: Run /init as init process Jan 15 00:27:56.295159 kernel: with arguments: Jan 15 00:27:56.295167 kernel: /init Jan 15 00:27:56.295175 kernel: with environment: Jan 15 00:27:56.295259 kernel: HOME=/ Jan 15 00:27:56.295268 kernel: TERM=linux Jan 15 00:27:56.295275 kernel: SCSI subsystem initialized Jan 15 00:27:56.295287 kernel: libata version 3.00 loaded. 
Jan 15 00:27:56.295513 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 00:27:56.295525 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 00:27:56.295800 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 00:27:56.296010 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 00:27:56.296303 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 00:27:56.296606 kernel: scsi host0: ahci Jan 15 00:27:56.296899 kernel: scsi host1: ahci Jan 15 00:27:56.297159 kernel: scsi host2: ahci Jan 15 00:27:56.297519 kernel: scsi host3: ahci Jan 15 00:27:56.297818 kernel: scsi host4: ahci Jan 15 00:27:56.298077 kernel: scsi host5: ahci Jan 15 00:27:56.298090 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 15 00:27:56.298099 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 15 00:27:56.298107 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 15 00:27:56.298116 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 15 00:27:56.298124 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 15 00:27:56.298132 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 15 00:27:56.298144 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 00:27:56.298153 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 00:27:56.298161 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 00:27:56.298169 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 00:27:56.298177 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 00:27:56.298241 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 00:27:56.298249 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:27:56.298261 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 00:27:56.298269 kernel: ata3.00: applying bridge limits Jan 15 00:27:56.298277 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:27:56.298285 kernel: ata3.00: configured for UDMA/100 Jan 15 00:27:56.298534 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 00:27:56.298798 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 00:27:56.299014 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 00:27:56.299025 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 00:27:56.299033 kernel: GPT:16515071 != 27000831 Jan 15 00:27:56.299041 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 00:27:56.299049 kernel: GPT:16515071 != 27000831 Jan 15 00:27:56.299057 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 00:27:56.299065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 00:27:56.299361 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 00:27:56.299375 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 00:27:56.299600 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 00:27:56.299611 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 15 00:27:56.299619 kernel: device-mapper: uevent: version 1.0.3 Jan 15 00:27:56.299628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 00:27:56.299640 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 00:27:56.299648 kernel: raid6: avx2x4 gen() 36183 MB/s Jan 15 00:27:56.299656 kernel: raid6: avx2x2 gen() 35594 MB/s Jan 15 00:27:56.299699 kernel: raid6: avx2x1 gen() 27155 MB/s Jan 15 00:27:56.299707 kernel: raid6: using algorithm avx2x4 gen() 36183 MB/s Jan 15 00:27:56.299715 kernel: raid6: .... xor() 4442 MB/s, rmw enabled Jan 15 00:27:56.299723 kernel: raid6: using avx2x2 recovery algorithm Jan 15 00:27:56.299732 kernel: xor: automatically using best checksumming function avx Jan 15 00:27:56.299747 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 00:27:56.299755 kernel: BTRFS: device fsid 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 15 00:27:56.299767 kernel: BTRFS info (device dm-0): first mount of filesystem 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 Jan 15 00:27:56.299775 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:27:56.299786 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 00:27:56.299794 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 00:27:56.299802 kernel: loop: module loaded Jan 15 00:27:56.299811 kernel: loop0: detected capacity change from 0 to 100160 Jan 15 00:27:56.299819 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 00:27:56.299828 systemd[1]: Successfully made /usr/ read-only. Jan 15 00:27:56.299839 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:27:56.299850 systemd[1]: Detected virtualization kvm. Jan 15 00:27:56.299859 systemd[1]: Detected architecture x86-64. Jan 15 00:27:56.299867 systemd[1]: Running in initrd. Jan 15 00:27:56.299875 systemd[1]: No hostname configured, using default hostname. Jan 15 00:27:56.299884 systemd[1]: Hostname set to . Jan 15 00:27:56.299895 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:27:56.299903 systemd[1]: Queued start job for default target initrd.target. Jan 15 00:27:56.299911 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:27:56.299920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:27:56.299928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:27:56.299937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 00:27:56.299946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:27:56.299957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 00:27:56.299966 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 00:27:56.299974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 00:27:56.299983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:27:56.299991 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:27:56.300000 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:27:56.300011 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:27:56.300020 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:27:56.300028 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:27:56.300036 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:27:56.300045 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:27:56.300053 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:27:56.300062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 00:27:56.300073 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 00:27:56.300081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:27:56.300090 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:27:56.300098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:27:56.300106 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:27:56.300115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 00:27:56.300126 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 00:27:56.300135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:27:56.300143 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 00:27:56.300152 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 00:27:56.300160 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 00:27:56.300168 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:27:56.300177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:27:56.300242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:27:56.300250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 00:27:56.300259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:27:56.300267 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 00:27:56.300341 systemd-journald[317]: Collecting audit messages is enabled. Jan 15 00:27:56.300390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:27:56.300400 systemd-journald[317]: Journal started Jan 15 00:27:56.300426 systemd-journald[317]: Runtime Journal (/run/log/journal/f302bab8715e4275bfa326dce0392457) is 6M, max 48.2M, 42.2M free. Jan 15 00:27:56.304413 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:27:56.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:27:56.313258 kernel: audit: type=1130 audit(1768436876.304:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.313355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 00:27:56.345273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 00:27:56.345329 kernel: audit: type=1130 audit(1768436876.344:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.344577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:27:56.344946 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 00:27:56.373830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:27:56.389603 kernel: Bridge firewalling registered Jan 15 00:27:56.384271 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 15 00:27:56.400107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:27:56.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.446373 kernel: audit: type=1130 audit(1768436876.431:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.446551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:27:56.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.711318 kernel: audit: type=1130 audit(1768436876.703:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.711303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:27:56.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.726498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:27:56.738541 kernel: audit: type=1130 audit(1768436876.718:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:27:56.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.743384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 00:27:56.754502 kernel: audit: type=1130 audit(1768436876.734:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.754983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:27:56.795710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:27:56.833324 kernel: audit: type=1130 audit(1768436876.795:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.833350 kernel: audit: type=1334 audit(1768436876.796:9): prog-id=6 op=LOAD Jan 15 00:27:56.833362 kernel: audit: type=1130 audit(1768436876.813:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.796000 audit: BPF prog-id=6 op=LOAD Jan 15 00:27:56.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.797901 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:27:56.813070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:27:56.815389 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 00:27:56.874449 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:27:56.915692 systemd-resolved[356]: Positive Trust Anchors: Jan 15 00:27:56.915727 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:27:56.915732 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:27:56.915760 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:27:56.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.942325 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 15 00:27:56.970337 kernel: audit: type=1130 audit(1768436876.955:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:56.944455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:27:56.956170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:27:57.107340 kernel: Loading iSCSI transport class v2.0-870. Jan 15 00:27:57.126294 kernel: iscsi: registered transport (tcp) Jan 15 00:27:57.157259 kernel: iscsi: registered transport (qla4xxx) Jan 15 00:27:57.157418 kernel: QLogic iSCSI HBA Driver Jan 15 00:27:57.197873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:27:57.277149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:27:57.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:57.279524 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:27:57.374267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 00:27:57.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:57.379958 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 00:27:57.385948 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 00:27:57.440601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:27:57.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:27:57.445000 audit: BPF prog-id=7 op=LOAD Jan 15 00:27:57.445000 audit: BPF prog-id=8 op=LOAD Jan 15 00:27:57.446529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:27:57.489736 systemd-udevd[600]: Using default interface naming scheme 'v257'. Jan 15 00:27:57.508997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 15 00:27:57.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:57.519865 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 00:27:57.560465 dracut-pre-trigger[669]: rd.md=0: removing MD RAID activation
Jan 15 00:27:57.572002 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 00:27:57.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:57.574000 audit: BPF prog-id=9 op=LOAD
Jan 15 00:27:57.575274 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 00:27:57.609428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 00:27:57.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:57.611901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 00:27:57.649647 systemd-networkd[715]: lo: Link UP
Jan 15 00:27:57.649702 systemd-networkd[715]: lo: Gained carrier
Jan 15 00:27:57.654718 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 00:27:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:57.658417 systemd[1]: Reached target network.target - Network.
Jan 15 00:27:57.953782 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 00:27:57.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:57.963370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 00:27:58.030840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 15 00:27:58.049850 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 00:27:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:58.074775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 15 00:27:58.100453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 15 00:27:58.122334 kernel: cryptd: max_cpu_qlen set to 1000
Jan 15 00:27:58.129273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 15 00:27:58.140722 systemd-networkd[715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 15 00:27:58.152455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 15 00:27:58.144254 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 00:27:58.145795 systemd-networkd[715]: eth0: Link UP
Jan 15 00:27:58.146147 systemd-networkd[715]: eth0: Gained carrier
Jan 15 00:27:58.146159 systemd-networkd[715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 15 00:27:58.182568 kernel: AES CTR mode by8 optimization enabled
Jan 15 00:27:58.154746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 00:27:58.161350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 00:27:58.163734 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 00:27:58.173161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 00:27:58.185516 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 00:27:58.207122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 00:27:58.209101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 00:27:58.213557 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 15 00:27:58.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:58.239080 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 00:27:58.257798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 00:27:58.270921 disk-uuid[844]: Primary Header is updated.
Jan 15 00:27:58.270921 disk-uuid[844]: Secondary Entries is updated.
Jan 15 00:27:58.270921 disk-uuid[844]: Secondary Header is updated.
Jan 15 00:27:58.282374 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 00:27:58.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:58.507541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 00:27:58.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:59.330025 disk-uuid[850]: Warning: The kernel is still using the old partition table.
Jan 15 00:27:59.330025 disk-uuid[850]: The new table will be used at the next reboot or after you
Jan 15 00:27:59.330025 disk-uuid[850]: run partprobe(8) or kpartx(8)
Jan 15 00:27:59.330025 disk-uuid[850]: The operation has completed successfully.
Jan 15 00:27:59.347514 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 00:27:59.347741 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 00:27:59.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:59.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:59.352584 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 00:27:59.409330 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Jan 15 00:27:59.415800 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790
Jan 15 00:27:59.415834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 15 00:27:59.424643 kernel: BTRFS info (device vda6): turning on async discard
Jan 15 00:27:59.424719 kernel: BTRFS info (device vda6): enabling free space tree
Jan 15 00:27:59.436293 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790
Jan 15 00:27:59.438962 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 00:27:59.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:27:59.443918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 00:27:59.870511 ignition[886]: Ignition 2.22.0
Jan 15 00:27:59.870561 ignition[886]: Stage: fetch-offline
Jan 15 00:27:59.870770 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 15 00:27:59.870787 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 15 00:27:59.870986 ignition[886]: parsed url from cmdline: ""
Jan 15 00:27:59.870991 ignition[886]: no config URL provided
Jan 15 00:27:59.870997 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 00:27:59.871009 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 15 00:27:59.871088 ignition[886]: op(1): [started] loading QEMU firmware config module
Jan 15 00:27:59.871094 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 15 00:27:59.901894 ignition[886]: op(1): [finished] loading QEMU firmware config module
Jan 15 00:28:00.158312 ignition[886]: parsing config with SHA512: eb90a1bd18c35e2b998196a4a69512534c0b6e7ed220af2e91ddebf54e85294be4fae75d555c9400785e643bc41394b63e1df4f81f0ea1ff3622e97bc742ce98
Jan 15 00:28:00.170576 systemd-networkd[715]: eth0: Gained IPv6LL
Jan 15 00:28:00.176980 unknown[886]: fetched base config from "system"
Jan 15 00:28:00.177136 unknown[886]: fetched user config from "qemu"
Jan 15 00:28:00.177633 ignition[886]: fetch-offline: fetch-offline passed
Jan 15 00:28:00.177768 ignition[886]: Ignition finished successfully
Jan 15 00:28:00.191138 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 00:28:00.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:28:00.199503 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 15 00:28:00.201027 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 00:28:00.315115 ignition[897]: Ignition 2.22.0
Jan 15 00:28:00.315173 ignition[897]: Stage: kargs
Jan 15 00:28:00.315414 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 15 00:28:00.315427 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 15 00:28:00.317360 ignition[897]: kargs: kargs passed
Jan 15 00:28:00.317420 ignition[897]: Ignition finished successfully
Jan 15 00:28:00.342171 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 00:28:00.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:00.344093 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 00:28:00.840534 ignition[905]: Ignition 2.22.0 Jan 15 00:28:00.840571 ignition[905]: Stage: disks Jan 15 00:28:00.841728 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:28:00.841750 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:28:00.845734 ignition[905]: disks: disks passed Jan 15 00:28:00.845855 ignition[905]: Ignition finished successfully Jan 15 00:28:00.865145 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 00:28:00.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:00.874033 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 00:28:00.883730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 00:28:00.889348 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:28:00.899118 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:28:00.906951 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:28:00.915675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 00:28:00.982275 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 00:28:00.990287 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 00:28:00.993367 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 00:28:00.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.149305 kernel: EXT4-fs (vda9): mounted filesystem 6f459a58-5046-4124-bfbc-09321f1e67d8 r/w with ordered data mode. Quota mode: none. Jan 15 00:28:01.150811 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 00:28:01.151825 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 00:28:01.157484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:28:01.163739 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 00:28:01.167978 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 00:28:01.196015 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Jan 15 00:28:01.196038 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:28:01.196051 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:28:01.168021 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 00:28:01.168047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:28:01.179576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 00:28:01.187675 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
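Before the check above, systemd-fsck has to resolve the ROOT label to a concrete block device (on this VM it is the ext4 partition mounted as /sysroot a moment later). A minimal sketch of the same lookup plus a non-destructive check; the exact options the real unit passes to fsck are not shown in the log:

    # Sketch: resolve /dev/disk/by-label/ROOT to its backing device and run a
    # read-only check. "-n" (report only) is a deliberately safe choice here and
    # is not necessarily what systemd-fsck-root itself passes.
    import os
    import subprocess

    dev = os.path.realpath("/dev/disk/by-label/ROOT")
    print("ROOT label resolves to", dev)

    subprocess.run(["fsck", "-n", dev], check=False)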
Jan 15 00:28:01.225841 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:28:01.225881 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:28:01.228831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:28:01.284869 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 00:28:01.294013 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Jan 15 00:28:01.300547 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 00:28:01.309982 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 00:28:01.461607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 00:28:01.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.470387 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 00:28:01.484067 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 15 00:28:01.484092 kernel: audit: type=1130 audit(1768436881.468:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.491374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 00:28:01.502608 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 00:28:01.509641 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:28:01.707671 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 00:28:01.720875 kernel: audit: type=1130 audit(1768436881.710:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.779260 ignition[1036]: INFO : Ignition 2.22.0 Jan 15 00:28:01.779260 ignition[1036]: INFO : Stage: mount Jan 15 00:28:01.785810 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:28:01.785810 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:28:01.796612 ignition[1036]: INFO : mount: mount passed Jan 15 00:28:01.800031 ignition[1036]: INFO : Ignition finished successfully Jan 15 00:28:01.806917 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 00:28:01.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:01.813169 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 00:28:01.819905 kernel: audit: type=1130 audit(1768436881.811:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:02.153963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
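The initrd-setup-root lines are `cut` failing because /sysroot/etc/{passwd,group,shadow,gshadow} do not exist yet on the freshly formatted root; the script is simply slicing colon-separated fields out of those databases. A minimal sketch of that kind of field extraction (taking the first field is an assumption, the log does not show which fields are requested):

    # Sketch: pull the first field (the login name) out of a passwd-style file,
    # i.e. the Python equivalent of: cut -d: -f1 /sysroot/etc/passwd
    def first_fields(path: str) -> list[str]:
        try:
            with open(path) as f:
                return [line.split(":", 1)[0] for line in f if line.strip()]
        except FileNotFoundError:
            # Same outcome as the "No such file or directory" messages above.
            return []

    print(first_fields("/sysroot/etc/passwd"))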
Jan 15 00:28:02.201499 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050) Jan 15 00:28:02.201556 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:28:02.201573 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:28:02.215718 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:28:02.215808 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:28:02.219111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:28:02.502280 ignition[1067]: INFO : Ignition 2.22.0 Jan 15 00:28:02.502280 ignition[1067]: INFO : Stage: files Jan 15 00:28:02.502280 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:28:02.502280 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:28:02.517841 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jan 15 00:28:02.517841 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 00:28:02.517841 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 00:28:02.517841 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 00:28:02.517841 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 00:28:02.517841 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 00:28:02.517374 unknown[1067]: wrote ssh authorized keys file for user: core Jan 15 00:28:02.551080 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:28:02.551080 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 15 00:28:02.596571 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 00:28:02.733808 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:28:02.743597 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 00:28:02.766439 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 00:28:02.766439 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:28:02.766439 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:28:02.766439 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:28:02.798766 ignition[1067]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:28:02.798766 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 15 00:28:03.344546 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 00:28:05.126654 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:28:05.126654 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 00:28:05.143657 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 00:28:05.152575 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 00:28:05.215086 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:28:05.230112 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:28:05.235825 ignition[1067]: INFO : files: files passed Jan 15 00:28:05.235825 ignition[1067]: INFO : Ignition finished successfully Jan 15 00:28:05.284364 kernel: audit: type=1130 audit(1768436885.239:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.235665 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 00:28:05.243470 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 00:28:05.253697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 00:28:05.321131 kernel: audit: type=1130 audit(1768436885.295:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.321160 kernel: audit: type=1131 audit(1768436885.295:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.289586 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 00:28:05.289834 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 00:28:05.328251 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 00:28:05.336872 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:28:05.336872 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:28:05.361998 kernel: audit: type=1130 audit(1768436885.346:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.362086 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:28:05.340946 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:28:05.347151 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 00:28:05.423351 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 00:28:05.563606 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
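Everything the files stage just did (the SSH key for core, the helm tarball, the kubernetes sysext link, the unit presets) is declared in the Ignition config fetched earlier; the config itself never appears in the log. The sketch below is a reconstructed fragment in Ignition's v3 JSON schema, expressed as a Python dict; the schema version, unit contents, and SSH key are placeholders, not the real values:

    # Sketch: an Ignition-style config fragment that would produce a subset of
    # the file/link/unit operations logged above. Reconstructed for illustration,
    # NOT the config actually served to this machine.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed schema version
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ]},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=(placeholder)\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }

    print(json.dumps(config, indent=2))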
Jan 15 00:28:05.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.563921 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 00:28:05.594405 kernel: audit: type=1130 audit(1768436885.571:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.594442 kernel: audit: type=1131 audit(1768436885.571:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.572794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 00:28:05.598261 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 00:28:05.609314 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 00:28:05.615876 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 00:28:05.675109 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:28:05.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.686447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 00:28:05.701898 kernel: audit: type=1130 audit(1768436885.683:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.721622 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:28:05.721914 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:28:05.742834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:28:05.748264 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 00:28:05.760037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 00:28:05.760400 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:28:05.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.772923 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 00:28:05.773333 systemd[1]: Stopped target basic.target - Basic System. Jan 15 00:28:05.783441 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 00:28:05.789933 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:28:05.796899 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 00:28:05.804264 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 15 00:28:05.811446 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 00:28:05.821379 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:28:05.825642 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 00:28:05.834404 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 00:28:05.840700 systemd[1]: Stopped target swap.target - Swaps. Jan 15 00:28:05.846407 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 00:28:05.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.846543 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:28:05.855636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:28:05.860881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:28:05.877372 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 00:28:05.880902 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:28:05.885430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 00:28:05.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.885690 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 00:28:05.901430 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 00:28:05.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.901685 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:28:05.912271 systemd[1]: Stopped target paths.target - Path Units. Jan 15 00:28:05.920861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 00:28:05.927430 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:28:05.951660 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 00:28:05.960436 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 00:28:05.965012 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 00:28:05.965329 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:28:05.968509 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 00:28:05.968678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:28:05.977014 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 00:28:06.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.977276 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:28:05.984962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 15 00:28:06.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:05.985173 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:28:06.002329 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 00:28:06.002531 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 00:28:06.011863 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 00:28:06.027465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 00:28:06.032349 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 00:28:06.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.032511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:28:06.033302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 00:28:06.033412 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:28:06.033991 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 00:28:06.034088 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:28:06.057396 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 00:28:06.105120 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 00:28:06.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.158081 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 00:28:06.162330 ignition[1125]: INFO : Ignition 2.22.0 Jan 15 00:28:06.162330 ignition[1125]: INFO : Stage: umount Jan 15 00:28:06.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.164375 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 00:28:06.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:06.177939 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:28:06.177939 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:28:06.177939 ignition[1125]: INFO : umount: umount passed Jan 15 00:28:06.177939 ignition[1125]: INFO : Ignition finished successfully Jan 15 00:28:06.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.164585 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 00:28:06.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.169249 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 00:28:06.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.169488 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 00:28:06.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.175434 systemd[1]: Stopped target network.target - Network. Jan 15 00:28:06.181326 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 00:28:06.181423 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 00:28:06.194946 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 00:28:06.195088 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 00:28:06.201050 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 00:28:06.201144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 00:28:06.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.204615 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 00:28:06.204691 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 00:28:06.211030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 00:28:06.211114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 00:28:06.272000 audit: BPF prog-id=9 op=UNLOAD Jan 15 00:28:06.217961 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 00:28:06.223944 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 00:28:06.240325 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 00:28:06.240668 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 00:28:06.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:06.256455 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 00:28:06.257096 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 00:28:06.257178 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:28:06.273254 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 00:28:06.275675 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 00:28:06.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.275803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:28:06.289474 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:28:06.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.299593 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 00:28:06.370000 audit: BPF prog-id=6 op=UNLOAD Jan 15 00:28:06.310538 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 00:28:06.349299 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 00:28:06.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.350860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:28:06.372307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 00:28:06.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.372403 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 00:28:06.381108 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 00:28:06.381159 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:28:06.386645 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 00:28:06.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.386783 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:28:06.402424 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 00:28:06.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.402504 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 00:28:06.425799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 00:28:06.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:06.426002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:28:06.487534 kernel: kauditd_printk_skb: 29 callbacks suppressed Jan 15 00:28:06.487567 kernel: audit: type=1131 audit(1768436886.474:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.441169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 00:28:06.509600 kernel: audit: type=1131 audit(1768436886.495:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.448062 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 00:28:06.537505 kernel: audit: type=1131 audit(1768436886.517:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.448283 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:28:06.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.457336 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 00:28:06.457603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:28:06.599668 kernel: audit: type=1131 audit(1768436886.544:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.599792 kernel: audit: type=1131 audit(1768436886.565:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.599808 kernel: audit: type=1130 audit(1768436886.578:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.599821 kernel: audit: type=1131 audit(1768436886.578:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:06.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.464004 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 00:28:06.464104 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 00:28:06.476649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 00:28:06.478619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:28:06.496124 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 00:28:06.496276 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:28:06.523498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 00:28:06.531101 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:28:06.545115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:28:06.546935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:28:06.655079 kernel: audit: type=1131 audit(1768436886.643:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:06.570094 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 00:28:06.570292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 00:28:06.633112 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 00:28:06.633333 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 00:28:06.655124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 00:28:06.656976 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 00:28:06.690658 systemd[1]: Switching root. Jan 15 00:28:06.735706 systemd-journald[317]: Journal stopped Jan 15 00:28:08.790844 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). 
Jan 15 00:28:08.791035 kernel: audit: type=1335 audit(1768436886.744:81): pid=317 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 15 00:28:08.791107 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 00:28:08.791130 kernel: SELinux: policy capability open_perms=1 Jan 15 00:28:08.791173 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 00:28:08.791289 kernel: SELinux: policy capability always_check_network=0 Jan 15 00:28:08.791310 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 00:28:08.791322 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 00:28:08.791335 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 00:28:08.791347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 00:28:08.791365 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 00:28:08.791420 kernel: audit: type=1403 audit(1768436886.970:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 00:28:08.791439 systemd[1]: Successfully loaded SELinux policy in 99.213ms. Jan 15 00:28:08.791460 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.258ms. Jan 15 00:28:08.791474 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:28:08.791488 systemd[1]: Detected virtualization kvm. Jan 15 00:28:08.791500 systemd[1]: Detected architecture x86-64. Jan 15 00:28:08.791513 systemd[1]: Detected first boot. Jan 15 00:28:08.791557 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:28:08.794914 zram_generator::config[1170]: No configuration found. Jan 15 00:28:08.795340 kernel: Guest personality initialized and is inactive Jan 15 00:28:08.795356 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 00:28:08.795368 kernel: Initialized host personality Jan 15 00:28:08.795380 kernel: NET: Registered PF_VSOCK protocol family Jan 15 00:28:08.795392 systemd[1]: Populated /etc with preset unit settings. Jan 15 00:28:08.797543 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 00:28:08.797717 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 00:28:08.797774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 00:28:08.797816 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 00:28:08.797829 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 00:28:08.797842 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 00:28:08.797854 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 00:28:08.797900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 00:28:08.797913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 00:28:08.797926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 00:28:08.797944 systemd[1]: Created slice user.slice - User and Session Slice. 
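"Detected first boot" and "Initializing machine ID from SMBIOS/DMI UUID" mean systemd seeded /etc/machine-id from the VM's DMI product UUID rather than generating a random one. A minimal sketch of inspecting both values (reading the DMI file normally requires root):

    # Sketch: look at the SMBIOS/DMI UUID that seeded the machine ID on first
    # boot, and the machine ID systemd ended up with.
    from pathlib import Path

    dmi_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = Path("/etc/machine-id").read_text().strip()

    print("DMI product UUID:", dmi_uuid)
    # This machine ID is also the directory name journald uses under
    # /run/log/journal/ later in this log.
    print("machine-id      :", machine_id)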
Jan 15 00:28:08.797956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:28:08.797969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:28:08.797981 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 00:28:08.798039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 00:28:08.798053 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 00:28:08.798065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:28:08.798078 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 00:28:08.798090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:28:08.798103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:28:08.798144 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 00:28:08.798157 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 00:28:08.798170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 00:28:08.798288 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 00:28:08.798314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:28:08.798329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:28:08.798342 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 00:28:08.798394 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:28:08.798408 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:28:08.798421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 00:28:08.798433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 00:28:08.798445 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 00:28:08.798458 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:28:08.798470 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 00:28:08.798523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:28:08.798549 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 00:28:08.798569 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 00:28:08.798590 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:28:08.798611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:28:08.798632 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 00:28:08.798652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 00:28:08.798712 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 00:28:08.798766 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 00:28:08.798779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:08.798791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 15 00:28:08.798803 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 00:28:08.798816 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 00:28:08.798829 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 00:28:08.798871 systemd[1]: Reached target machines.target - Containers. Jan 15 00:28:08.798884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 00:28:08.798897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:28:08.798909 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:28:08.798922 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 00:28:08.798934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:28:08.798971 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:28:08.799009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:28:08.799024 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 00:28:08.799036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:28:08.799049 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 00:28:08.802542 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 00:28:08.802675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 00:28:08.802767 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 00:28:08.802783 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 00:28:08.802796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:28:08.802809 kernel: fuse: init (API version 7.41) Jan 15 00:28:08.802853 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:28:08.802866 kernel: ACPI: bus type drm_connector registered Jan 15 00:28:08.802879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:28:08.802891 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:28:08.802904 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 00:28:08.802917 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 00:28:08.802929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:28:08.803092 systemd-journald[1256]: Collecting audit messages is enabled. Jan 15 00:28:08.803119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:08.803134 systemd-journald[1256]: Journal started Jan 15 00:28:08.803154 systemd-journald[1256]: Runtime Journal (/run/log/journal/f302bab8715e4275bfa326dce0392457) is 6M, max 48.2M, 42.2M free. 
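The journald line above sizes the runtime journal against /run and reports current size, cap, and free space. A minimal sketch of checking that usage after boot; `journalctl --disk-usage` reports the combined runtime-plus-persistent figure from the CLI:

    # Sketch: sum the on-disk size of the runtime journal directory that the
    # "Runtime Journal (...) is 6M, max 48.2M, 42.2M free" message refers to.
    import os

    def dir_size(root: str) -> int:
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                total += os.path.getsize(os.path.join(dirpath, name))
        return total

    print(f"{dir_size('/run/log/journal') / 2**20:.1f} MiB")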
Jan 15 00:28:08.373000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 00:28:08.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.699000 audit: BPF prog-id=14 op=UNLOAD Jan 15 00:28:08.699000 audit: BPF prog-id=13 op=UNLOAD Jan 15 00:28:08.700000 audit: BPF prog-id=15 op=LOAD Jan 15 00:28:08.701000 audit: BPF prog-id=16 op=LOAD Jan 15 00:28:08.701000 audit: BPF prog-id=17 op=LOAD Jan 15 00:28:08.782000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 00:28:08.782000 audit[1256]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd235508a0 a2=4000 a3=0 items=0 ppid=1 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:08.782000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 00:28:07.920628 systemd[1]: Queued start job for default target multi-user.target. Jan 15 00:28:07.948366 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 00:28:07.953114 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 00:28:08.817863 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:28:08.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.834028 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 00:28:08.840289 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 00:28:08.847552 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 00:28:08.855716 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 00:28:08.876977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 00:28:08.906885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 00:28:08.946947 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 00:28:08.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.979948 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:28:08.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.985567 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 15 00:28:08.985988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 00:28:08.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.991358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:28:08.991699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:28:08.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:08.996960 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:28:08.997401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:28:09.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.002490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:28:09.002875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:28:09.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.008545 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 00:28:09.008940 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 00:28:09.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.013660 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:28:09.014035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
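The modprobe@*.service entries are instances of a single template unit whose instance name is the module to load, which is why configfs, dm_mod, drm, efi_pstore, fuse and loop each get their own start/stop pair. A minimal sketch of the equivalent module loads; modprobe on PATH is assumed, and the exact flags the template unit passes are not shown in the log:

    # Sketch: load the same modules the modprobe@<instance>.service units above
    # handle; each instance boils down to a modprobe call for its instance name.
    import subprocess

    for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        # systemd equivalent: systemctl start modprobe@<module>.service
        subprocess.run(["modprobe", module], check=False)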
Jan 15 00:28:09.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.019537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:28:09.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.024775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:28:09.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.032582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 00:28:09.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.038312 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 00:28:09.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.061863 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:28:09.069248 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 00:28:09.076655 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 00:28:09.083289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 00:28:09.088375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 00:28:09.088477 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:28:09.096389 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 00:28:09.102029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:28:09.102266 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:28:09.105368 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 00:28:09.111377 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 00:28:09.115906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:28:09.128521 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 15 00:28:09.134132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:28:09.136976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:28:09.146155 systemd-journald[1256]: Time spent on flushing to /var/log/journal/f302bab8715e4275bfa326dce0392457 is 27.264ms for 1106 entries. Jan 15 00:28:09.146155 systemd-journald[1256]: System Journal (/var/log/journal/f302bab8715e4275bfa326dce0392457) is 8M, max 163.5M, 155.5M free. Jan 15 00:28:09.202052 systemd-journald[1256]: Received client request to flush runtime journal. Jan 15 00:28:09.202387 kernel: loop1: detected capacity change from 0 to 111544 Jan 15 00:28:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.146462 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 00:28:09.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.169541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:28:09.180595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:28:09.187540 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 00:28:09.192702 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 00:28:09.197808 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 00:28:09.207533 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 00:28:09.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.215655 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 00:28:09.222047 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 00:28:09.239684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:28:09.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.254079 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 15 00:28:09.254118 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 15 00:28:09.263605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:28:09.270284 kernel: loop2: detected capacity change from 0 to 224512 Jan 15 00:28:09.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.271924 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
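systemd-journald reports 27.264 ms spent flushing 1106 entries to the persistent journal, with the System Journal at 8M against a 163.5M cap and 155.5M free. A small sketch (illustrative only) reproducing that arithmetic from the logged numbers:

# Per-entry flush cost and remaining headroom, from the journald status lines above.
flush_ms, entries = 27.264, 1106
print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~24.7 us
journal_mb, max_mb = 8.0, 163.5
print(f"{max_mb - journal_mb:.1f} MB headroom")           # 155.5, matching the logged free space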
Jan 15 00:28:09.291818 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 00:28:09.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.320311 kernel: loop3: detected capacity change from 0 to 119256 Jan 15 00:28:09.369465 kernel: loop4: detected capacity change from 0 to 111544 Jan 15 00:28:09.568463 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 00:28:09.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.578000 audit: BPF prog-id=18 op=LOAD Jan 15 00:28:09.578000 audit: BPF prog-id=19 op=LOAD Jan 15 00:28:09.578000 audit: BPF prog-id=20 op=LOAD Jan 15 00:28:09.580340 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 00:28:09.583267 kernel: loop5: detected capacity change from 0 to 224512 Jan 15 00:28:09.591082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:28:09.589000 audit: BPF prog-id=21 op=LOAD Jan 15 00:28:09.600351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:28:09.619351 kernel: loop6: detected capacity change from 0 to 119256 Jan 15 00:28:09.628000 audit: BPF prog-id=22 op=LOAD Jan 15 00:28:09.629000 audit: BPF prog-id=23 op=LOAD Jan 15 00:28:09.629000 audit: BPF prog-id=24 op=LOAD Jan 15 00:28:09.635284 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 00:28:09.641000 audit: BPF prog-id=25 op=LOAD Jan 15 00:28:09.641000 audit: BPF prog-id=26 op=LOAD Jan 15 00:28:09.641000 audit: BPF prog-id=27 op=LOAD Jan 15 00:28:09.643840 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 00:28:09.650942 (sd-merge)[1313]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 00:28:09.661425 (sd-merge)[1313]: Merged extensions into '/usr'. Jan 15 00:28:09.664174 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Jan 15 00:28:09.664269 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Jan 15 00:28:09.672939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:28:09.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:09.679979 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 00:28:09.680006 systemd[1]: Reloading... Jan 15 00:28:09.725509 systemd-nsresourced[1318]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 00:28:09.767269 zram_generator::config[1361]: No configuration found. Jan 15 00:28:10.081811 systemd-oomd[1315]: No swap; memory pressure usage will be degraded Jan 15 00:28:10.139564 systemd-resolved[1316]: Positive Trust Anchors: Jan 15 00:28:10.139620 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:28:10.139629 systemd-resolved[1316]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:28:10.139678 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:28:10.154576 systemd-resolved[1316]: Defaulting to hostname 'linux'. Jan 15 00:28:10.313458 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 00:28:10.313706 systemd[1]: Reloading finished in 632 ms. Jan 15 00:28:10.343812 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 00:28:10.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.349576 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 00:28:10.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.356352 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 00:28:10.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.363351 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:28:10.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.370553 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 00:28:10.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.385503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:28:10.411041 systemd[1]: Starting ensure-sysext.service... Jan 15 00:28:10.417362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
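The two positive trust anchors systemd-resolved loads are DS records for the root zone with algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256); key tag 20326 is the root KSK introduced in 2017, and 38696 appears to be its announced successor (an assumption worth checking against IANA's published anchors). A minimal sketch (illustrative only) laying out the fields of the first record:

# Fields of the first root trust anchor logged above, labeled for readability.
ds = {
    "owner": ".",            # root zone
    "key_tag": 20326,        # root KSK-2017
    "algorithm": 8,          # RSA/SHA-256
    "digest_type": 2,        # SHA-256 digest
    "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
}
print(f"{ds['owner']} IN DS {ds['key_tag']} {ds['algorithm']} {ds['digest_type']} {ds['digest']}")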
Jan 15 00:28:10.426000 audit: BPF prog-id=28 op=LOAD Jan 15 00:28:10.432000 audit: BPF prog-id=21 op=UNLOAD Jan 15 00:28:10.432000 audit: BPF prog-id=29 op=LOAD Jan 15 00:28:10.432000 audit: BPF prog-id=22 op=UNLOAD Jan 15 00:28:10.432000 audit: BPF prog-id=30 op=LOAD Jan 15 00:28:10.432000 audit: BPF prog-id=31 op=LOAD Jan 15 00:28:10.432000 audit: BPF prog-id=23 op=UNLOAD Jan 15 00:28:10.432000 audit: BPF prog-id=24 op=UNLOAD Jan 15 00:28:10.437000 audit: BPF prog-id=32 op=LOAD Jan 15 00:28:10.437000 audit: BPF prog-id=15 op=UNLOAD Jan 15 00:28:10.437000 audit: BPF prog-id=33 op=LOAD Jan 15 00:28:10.437000 audit: BPF prog-id=34 op=LOAD Jan 15 00:28:10.437000 audit: BPF prog-id=16 op=UNLOAD Jan 15 00:28:10.437000 audit: BPF prog-id=17 op=UNLOAD Jan 15 00:28:10.438000 audit: BPF prog-id=35 op=LOAD Jan 15 00:28:10.438000 audit: BPF prog-id=25 op=UNLOAD Jan 15 00:28:10.438000 audit: BPF prog-id=36 op=LOAD Jan 15 00:28:10.438000 audit: BPF prog-id=37 op=LOAD Jan 15 00:28:10.438000 audit: BPF prog-id=26 op=UNLOAD Jan 15 00:28:10.438000 audit: BPF prog-id=27 op=UNLOAD Jan 15 00:28:10.441000 audit: BPF prog-id=38 op=LOAD Jan 15 00:28:10.441000 audit: BPF prog-id=18 op=UNLOAD Jan 15 00:28:10.441000 audit: BPF prog-id=39 op=LOAD Jan 15 00:28:10.441000 audit: BPF prog-id=40 op=LOAD Jan 15 00:28:10.441000 audit: BPF prog-id=19 op=UNLOAD Jan 15 00:28:10.441000 audit: BPF prog-id=20 op=UNLOAD Jan 15 00:28:10.448490 systemd[1]: Reload requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)... Jan 15 00:28:10.448507 systemd[1]: Reloading... Jan 15 00:28:10.707413 zram_generator::config[1427]: No configuration found. Jan 15 00:28:10.731873 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 00:28:10.732476 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 00:28:10.733049 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 00:28:10.736722 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jan 15 00:28:10.736965 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jan 15 00:28:10.748141 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:28:10.748282 systemd-tmpfiles[1400]: Skipping /boot Jan 15 00:28:10.770292 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:28:10.770331 systemd-tmpfiles[1400]: Skipping /boot Jan 15 00:28:10.943662 systemd[1]: Reloading finished in 494 ms. Jan 15 00:28:10.968159 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 00:28:10.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:10.973387 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:28:10.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:10.980000 audit: BPF prog-id=41 op=LOAD Jan 15 00:28:10.981000 audit: BPF prog-id=38 op=UNLOAD Jan 15 00:28:10.981000 audit: BPF prog-id=42 op=LOAD Jan 15 00:28:10.981000 audit: BPF prog-id=43 op=LOAD Jan 15 00:28:10.981000 audit: BPF prog-id=39 op=UNLOAD Jan 15 00:28:10.981000 audit: BPF prog-id=40 op=UNLOAD Jan 15 00:28:10.982000 audit: BPF prog-id=44 op=LOAD Jan 15 00:28:10.982000 audit: BPF prog-id=32 op=UNLOAD Jan 15 00:28:10.982000 audit: BPF prog-id=45 op=LOAD Jan 15 00:28:10.983000 audit: BPF prog-id=46 op=LOAD Jan 15 00:28:10.983000 audit: BPF prog-id=33 op=UNLOAD Jan 15 00:28:10.983000 audit: BPF prog-id=34 op=UNLOAD Jan 15 00:28:10.984000 audit: BPF prog-id=47 op=LOAD Jan 15 00:28:10.984000 audit: BPF prog-id=28 op=UNLOAD Jan 15 00:28:10.985000 audit: BPF prog-id=48 op=LOAD Jan 15 00:28:10.985000 audit: BPF prog-id=29 op=UNLOAD Jan 15 00:28:10.985000 audit: BPF prog-id=49 op=LOAD Jan 15 00:28:10.985000 audit: BPF prog-id=50 op=LOAD Jan 15 00:28:10.985000 audit: BPF prog-id=30 op=UNLOAD Jan 15 00:28:10.985000 audit: BPF prog-id=31 op=UNLOAD Jan 15 00:28:11.038000 audit: BPF prog-id=51 op=LOAD Jan 15 00:28:11.038000 audit: BPF prog-id=35 op=UNLOAD Jan 15 00:28:11.038000 audit: BPF prog-id=52 op=LOAD Jan 15 00:28:11.038000 audit: BPF prog-id=53 op=LOAD Jan 15 00:28:11.038000 audit: BPF prog-id=36 op=UNLOAD Jan 15 00:28:11.038000 audit: BPF prog-id=37 op=UNLOAD Jan 15 00:28:11.192377 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:28:11.197418 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 00:28:11.208723 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 00:28:11.217709 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 00:28:11.224000 audit: BPF prog-id=8 op=UNLOAD Jan 15 00:28:11.224000 audit: BPF prog-id=7 op=UNLOAD Jan 15 00:28:11.226000 audit: BPF prog-id=54 op=LOAD Jan 15 00:28:11.226000 audit: BPF prog-id=55 op=LOAD Jan 15 00:28:11.229541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:28:11.240514 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 00:28:11.255118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:11.256060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:28:11.265710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:28:11.265000 audit[1481]: SYSTEM_BOOT pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.277497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:28:11.295136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:28:11.300919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:28:11.301390 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 15 00:28:11.301554 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:28:11.301705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:11.307576 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 00:28:11.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.316060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:28:11.316634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:28:11.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.324558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:28:11.324991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:28:11.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:11.335114 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:28:11.335809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:28:11.340000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:28:11.340000 audit[1498]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde2775dd0 a2=420 a3=0 items=0 ppid=1470 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:11.340000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:11.340858 augenrules[1498]: No rules Jan 15 00:28:11.341923 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:28:11.346289 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:28:11.364688 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 00:28:11.368281 systemd-udevd[1480]: Using default interface naming scheme 'v257'. Jan 15 00:28:11.373287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 15 00:28:11.373492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:28:11.375324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:28:11.381501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:28:11.389141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:28:11.392922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:28:11.393152 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:28:11.393350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:28:11.393466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:11.395997 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 00:28:11.401399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:28:11.401706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:28:11.406551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:28:11.406923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:28:11.412037 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:28:11.412366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:28:11.420897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:28:11.442418 systemd[1]: Finished ensure-sysext.service. Jan 15 00:28:11.445703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:11.449673 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:28:11.453012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:28:11.454956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:28:11.460004 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:28:11.465528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:28:11.470928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:28:11.474596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:28:11.474721 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:28:11.474805 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:28:11.479503 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 15 00:28:11.486315 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 00:28:11.489931 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 00:28:11.489962 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:28:11.509596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:28:11.514160 augenrules[1530]: /sbin/augenrules: No change Jan 15 00:28:11.520069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:28:11.524572 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:28:11.526369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:28:11.533140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:28:11.533552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:28:11.540610 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:28:11.540976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:28:11.556652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:28:11.556792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:28:11.566444 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 15 00:28:11.566488 kernel: audit: type=1305 audit(1768436891.563:213): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:28:11.563000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:28:11.565634 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:28:11.567118 augenrules[1564]: No rules Jan 15 00:28:11.566883 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
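The PROCTITLE records emitted while audit-rules.service runs carry the process command line hex-encoded with NUL separators between arguments; the value logged for auditctl decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A minimal sketch (illustrative, not part of the log) of that decoding:

# Decode the hex-encoded proctitle from the audit records above.
hexval = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(hexval).split(b"\x00")
print(" ".join(a.decode() for a in argv))   # /sbin/auditctl -R /etc/audit/audit.rules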
Jan 15 00:28:11.593033 kernel: audit: type=1300 audit(1768436891.563:213): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ff2c9e0 a2=420 a3=0 items=0 ppid=1530 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:11.593085 kernel: audit: type=1327 audit(1768436891.563:213): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:11.593163 kernel: audit: type=1305 audit(1768436891.563:214): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:28:11.563000 audit[1564]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ff2c9e0 a2=420 a3=0 items=0 ppid=1530 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:11.563000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:11.563000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:28:11.630560 kernel: audit: type=1300 audit(1768436891.563:214): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ff2ee70 a2=420 a3=0 items=0 ppid=1530 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:11.630679 kernel: audit: type=1327 audit(1768436891.563:214): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:11.563000 audit[1564]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ff2ee70 a2=420 a3=0 items=0 ppid=1530 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:11.563000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:11.695066 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 00:28:11.700269 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 00:28:11.720727 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 00:28:11.732498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:28:11.734016 systemd-networkd[1541]: lo: Link UP Jan 15 00:28:11.734025 systemd-networkd[1541]: lo: Gained carrier Jan 15 00:28:11.738647 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:28:11.754731 systemd[1]: Reached target network.target - Network. Jan 15 00:28:11.757426 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:28:11.757464 systemd-networkd[1541]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:28:11.760870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 15 00:28:11.761504 systemd-networkd[1541]: eth0: Link UP Jan 15 00:28:11.761704 systemd-networkd[1541]: eth0: Gained carrier Jan 15 00:28:11.761721 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:28:11.767370 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 00:28:11.772817 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 00:28:11.787301 systemd-networkd[1541]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 00:28:11.788245 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection. Jan 15 00:28:13.078903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 00:28:13.078480 systemd-timesyncd[1546]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 00:28:13.078670 systemd-timesyncd[1546]: Initial clock synchronization to Thu 2026-01-15 00:28:13.078208 UTC. Jan 15 00:28:13.079372 systemd-resolved[1316]: Clock change detected. Flushing caches. Jan 15 00:28:13.086871 kernel: ACPI: button: Power Button [PWRF] Jan 15 00:28:13.090923 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 00:28:13.271475 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 00:28:13.289542 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 00:28:13.307830 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 00:28:13.315207 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 00:28:13.408908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:28:13.904571 ldconfig[1472]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 00:28:13.920932 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 00:28:13.931495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 00:28:14.212009 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 00:28:14.227401 kernel: kvm_amd: TSC scaling supported Jan 15 00:28:14.227726 kernel: kvm_amd: Nested Virtualization enabled Jan 15 00:28:14.228227 kernel: kvm_amd: Nested Paging enabled Jan 15 00:28:14.228529 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 00:28:14.228717 kernel: kvm_amd: PMU virtualization is disabled Jan 15 00:28:14.286846 kernel: EDAC MC: Ver: 3.0.0 Jan 15 00:28:14.318310 systemd-networkd[1541]: eth0: Gained IPv6LL Jan 15 00:28:14.359706 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 00:28:14.363062 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 00:28:14.374140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:28:14.380760 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:28:14.384742 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 00:28:14.389115 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 00:28:14.393717 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
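The journal timestamps step forward here once systemd-timesyncd contacts 10.0.0.1: the entry stamped 00:28:11.788245 is followed by one stamped 00:28:13.078480, consistent with the reported initial clock synchronization and the cache flush in systemd-resolved. A rough sketch (illustrative only; the real offset also includes whatever time genuinely elapsed between the two entries) of estimating the step from those two timestamps:

from datetime import datetime

# Timestamps copied from the two adjacent journal entries around the sync.
before = datetime.fromisoformat("2026-01-15 00:28:11.788245")
after  = datetime.fromisoformat("2026-01-15 00:28:13.078480")
print(f"clock stepped forward by ~{(after - before).total_seconds():.2f} s")  # ~1.29 s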
Jan 15 00:28:14.398171 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 00:28:14.403464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 00:28:14.409141 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 00:28:14.419959 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 00:28:14.423502 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 00:28:14.427594 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 00:28:14.427659 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:28:14.430567 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:28:14.436611 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 00:28:14.444500 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 00:28:14.453589 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 00:28:14.459317 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 00:28:14.464020 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 00:28:14.471328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 00:28:14.475670 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 00:28:14.480657 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 00:28:14.485468 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:28:14.488934 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:28:14.492435 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:28:14.492519 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:28:14.494396 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 00:28:14.500064 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 00:28:14.518432 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 00:28:14.525699 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 00:28:14.533603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 00:28:14.540765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 00:28:14.545867 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 00:28:14.549013 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 00:28:14.553629 jq[1622]: false Jan 15 00:28:14.562197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:28:14.568948 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 00:28:14.576628 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Refreshing passwd entry cache Jan 15 00:28:14.576641 oslogin_cache_refresh[1624]: Refreshing passwd entry cache Jan 15 00:28:14.577546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 15 00:28:14.582380 extend-filesystems[1623]: Found /dev/vda6 Jan 15 00:28:14.584493 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 00:28:14.595387 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 00:28:14.601543 extend-filesystems[1623]: Found /dev/vda9 Jan 15 00:28:14.605469 extend-filesystems[1623]: Checking size of /dev/vda9 Jan 15 00:28:14.601627 oslogin_cache_refresh[1624]: Failure getting users, quitting Jan 15 00:28:14.610512 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Failure getting users, quitting Jan 15 00:28:14.610512 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:28:14.610512 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Refreshing group entry cache Jan 15 00:28:14.601654 oslogin_cache_refresh[1624]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:28:14.601763 oslogin_cache_refresh[1624]: Refreshing group entry cache Jan 15 00:28:14.616605 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 00:28:14.623924 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Failure getting groups, quitting Jan 15 00:28:14.623924 google_oslogin_nss_cache[1624]: oslogin_cache_refresh[1624]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:28:14.624010 extend-filesystems[1623]: Resized partition /dev/vda9 Jan 15 00:28:14.621997 oslogin_cache_refresh[1624]: Failure getting groups, quitting Jan 15 00:28:14.622015 oslogin_cache_refresh[1624]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:28:14.632053 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 00:28:14.636142 extend-filesystems[1645]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 00:28:14.638393 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 00:28:14.643224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 00:28:14.646007 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 00:28:14.645988 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 00:28:14.654478 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 00:28:14.667677 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 00:28:14.672721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 00:28:14.673140 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 00:28:14.673562 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 00:28:14.673925 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 00:28:14.681463 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 00:28:14.682940 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 00:28:14.692441 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 00:28:14.694677 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 15 00:28:14.715073 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 00:28:14.790428 extend-filesystems[1645]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 00:28:14.790428 extend-filesystems[1645]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 00:28:14.790428 extend-filesystems[1645]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 00:28:14.897245 jq[1652]: true Jan 15 00:28:14.897964 extend-filesystems[1623]: Resized filesystem in /dev/vda9 Jan 15 00:28:14.897632 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 00:28:14.899057 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 00:28:15.048137 tar[1662]: linux-amd64/LICENSE Jan 15 00:28:15.054892 tar[1662]: linux-amd64/helm Jan 15 00:28:15.059016 jq[1679]: true Jan 15 00:28:15.059315 update_engine[1649]: I20260115 00:28:15.058876 1649 main.cc:92] Flatcar Update Engine starting Jan 15 00:28:15.088532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 00:28:15.133468 systemd-logind[1646]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 00:28:15.134559 dbus-daemon[1620]: [system] SELinux support is enabled Jan 15 00:28:15.133502 systemd-logind[1646]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 00:28:15.134128 systemd-logind[1646]: New seat seat0. Jan 15 00:28:15.135006 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 00:28:15.148248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 00:28:15.148330 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 00:28:15.153317 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 00:28:15.153403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 00:28:15.158134 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 00:28:15.166688 update_engine[1649]: I20260115 00:28:15.163655 1649 update_check_scheduler.cc:74] Next update check in 4m18s Jan 15 00:28:15.167624 systemd[1]: Started update-engine.service - Update Engine. Jan 15 00:28:15.176612 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 00:28:15.177394 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 00:28:15.186912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 00:28:15.195300 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 00:28:15.249993 bash[1705]: Updated "/home/core/.ssh/authorized_keys" Jan 15 00:28:15.252504 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 00:28:15.260665 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 00:28:15.605877 sshd_keygen[1656]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 00:28:15.854990 locksmithd[1706]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 00:28:15.887303 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
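extend-filesystems grows the root ext4 filesystem on /dev/vda9 on-line from 456704 to 1784827 blocks of 4 KiB, i.e. from roughly 1.7 GiB to about 6.8 GiB. A short sketch (illustrative only) doing that conversion from the logged block counts:

# Translate the resize2fs block counts (4 KiB blocks) into sizes.
BLOCK = 4096
for label, blocks in (("before", 456_704), ("after", 1_784_827)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 1.74 GiB, after: 6.81 GiB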
Jan 15 00:28:15.898445 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 00:28:16.140994 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 00:28:16.141441 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 00:28:16.155577 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 00:28:16.206641 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 00:28:16.217482 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 00:28:16.441407 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 00:28:16.449646 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 00:28:16.849985 containerd[1681]: time="2026-01-15T00:28:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 00:28:16.851867 containerd[1681]: time="2026-01-15T00:28:16.851074191Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 00:28:16.893499 containerd[1681]: time="2026-01-15T00:28:16.893446655Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="101.63µs" Jan 15 00:28:16.893739 containerd[1681]: time="2026-01-15T00:28:16.893713503Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 00:28:16.893933 containerd[1681]: time="2026-01-15T00:28:16.893914949Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 00:28:16.893994 containerd[1681]: time="2026-01-15T00:28:16.893980912Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 00:28:16.894462 containerd[1681]: time="2026-01-15T00:28:16.894439638Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 00:28:16.894541 containerd[1681]: time="2026-01-15T00:28:16.894525919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:28:16.894855 containerd[1681]: time="2026-01-15T00:28:16.894749016Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:28:16.894920 containerd[1681]: time="2026-01-15T00:28:16.894905819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.895317 containerd[1681]: time="2026-01-15T00:28:16.895294154Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.895432 containerd[1681]: time="2026-01-15T00:28:16.895416903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:28:16.895486 containerd[1681]: time="2026-01-15T00:28:16.895473048Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:28:16.895550 containerd[1681]: time="2026-01-15T00:28:16.895532438Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.896097 containerd[1681]: time="2026-01-15T00:28:16.896073028Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.896211 containerd[1681]: time="2026-01-15T00:28:16.896192341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 00:28:16.896552 containerd[1681]: time="2026-01-15T00:28:16.896525402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.897089 containerd[1681]: time="2026-01-15T00:28:16.897068176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.897212 containerd[1681]: time="2026-01-15T00:28:16.897194121Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:28:16.897300 containerd[1681]: time="2026-01-15T00:28:16.897287285Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 00:28:16.897502 containerd[1681]: time="2026-01-15T00:28:16.897482490Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 00:28:16.898171 containerd[1681]: time="2026-01-15T00:28:16.898150897Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 00:28:16.901953 containerd[1681]: time="2026-01-15T00:28:16.898324782Z" level=info msg="metadata content store policy set" policy=shared Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.005603875Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.006768860Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007016883Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007057319Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007073589Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007107973Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007132740Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 00:28:17.006491 containerd[1681]: time="2026-01-15T00:28:17.007142157Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007154390Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007208240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007231494Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007287518Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007302506Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007317474Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007624227Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007647209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007689389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007775279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007808561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007818750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007830903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007861760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 00:28:17.008301 containerd[1681]: time="2026-01-15T00:28:17.007873603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.007883271Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.007892938Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.007948722Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.008320527Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.008338610Z" level=info msg="Start snapshots syncer" Jan 15 00:28:17.009756 containerd[1681]: time="2026-01-15T00:28:17.008490133Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 
00:28:17.018261 containerd[1681]: time="2026-01-15T00:28:17.015862086Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 00:28:17.018261 containerd[1681]: time="2026-01-15T00:28:17.016202352Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016332124Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016579956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016634819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016651530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016663062Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016677669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016701784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016723394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: 
time="2026-01-15T00:28:17.016744654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016769110Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.016964965Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.017000161Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:28:17.018907 containerd[1681]: time="2026-01-15T00:28:17.017015279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017038483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017059562Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017077155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017137728Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017383357Z" level=info msg="runtime interface created" Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017401019Z" level=info msg="created NRI interface" Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017422790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017483584Z" level=info msg="Connect containerd service" Jan 15 00:28:17.019186 containerd[1681]: time="2026-01-15T00:28:17.017596143Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 00:28:17.021850 containerd[1681]: time="2026-01-15T00:28:17.021151411Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 00:28:17.105907 tar[1662]: linux-amd64/README.md Jan 15 00:28:17.295633 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 15 00:28:18.132910 containerd[1681]: time="2026-01-15T00:28:18.132171210Z" level=info msg="Start subscribing containerd event" Jan 15 00:28:18.132910 containerd[1681]: time="2026-01-15T00:28:18.132700358Z" level=info msg="Start recovering state" Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.133749437Z" level=info msg="Start event monitor" Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.133960742Z" level=info msg="Start cni network conf syncer for default" Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.134139706Z" level=info msg="Start streaming server" Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.134225275Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.134269648Z" level=info msg="runtime interface starting up..." Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.134311997Z" level=info msg="starting plugins..." Jan 15 00:28:18.134939 containerd[1681]: time="2026-01-15T00:28:18.134400473Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 00:28:18.138427 containerd[1681]: time="2026-01-15T00:28:18.135910371Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 00:28:18.138427 containerd[1681]: time="2026-01-15T00:28:18.136164345Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 00:28:18.138427 containerd[1681]: time="2026-01-15T00:28:18.136487639Z" level=info msg="containerd successfully booted in 1.288789s" Jan 15 00:28:18.138175 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 00:28:20.005221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:20.010218 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 00:28:20.017580 systemd[1]: Startup finished in 4.758s (kernel) + 11.499s (initrd) + 11.862s (userspace) = 28.119s. Jan 15 00:28:20.041660 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:28:21.777523 kubelet[1757]: E0115 00:28:21.776961 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:28:21.782207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:28:21.782576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:28:21.783613 systemd[1]: kubelet.service: Consumed 5.083s CPU time, 266.7M memory peak. Jan 15 00:28:23.298921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 00:28:23.301520 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Jan 15 00:28:23.418748 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:23.421455 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:23.437594 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 00:28:23.439871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
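The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; nothing has bootstrapped the node at this point in the boot. For orientation only, the missing file is a KubeletConfiguration document. The sketch below uses generic upstream defaults (DNS address, paths) that are assumptions rather than this node's real settings:

    # /var/lib/kubelet/config.yaml -- illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # consistent with SystemdCgroup=true in the CRI runc options logged above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                   # assumed cluster DNS service IP

systemd treats the failure as transient and schedules a restart, which is visible further down ("Scheduled restart job, restart counter is at 1").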
Jan 15 00:28:23.448480 systemd-logind[1646]: New session 1 of user core. Jan 15 00:28:23.471600 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 00:28:23.477240 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 00:28:23.501213 (systemd)[1775]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 00:28:23.507352 systemd-logind[1646]: New session c1 of user core. Jan 15 00:28:23.733752 systemd[1775]: Queued start job for default target default.target. Jan 15 00:28:23.753745 systemd[1775]: Created slice app.slice - User Application Slice. Jan 15 00:28:23.753960 systemd[1775]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 00:28:23.753986 systemd[1775]: Reached target paths.target - Paths. Jan 15 00:28:23.754133 systemd[1775]: Reached target timers.target - Timers. Jan 15 00:28:23.757453 systemd[1775]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 00:28:23.759257 systemd[1775]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 00:28:23.775669 systemd[1775]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 00:28:23.775866 systemd[1775]: Reached target sockets.target - Sockets. Jan 15 00:28:23.783359 systemd[1775]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 00:28:23.783632 systemd[1775]: Reached target basic.target - Basic System. Jan 15 00:28:23.783870 systemd[1775]: Reached target default.target - Main User Target. Jan 15 00:28:23.783972 systemd[1775]: Startup finished in 266ms. Jan 15 00:28:23.784088 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 00:28:23.794082 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 00:28:23.825651 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:34984.service - OpenSSH per-connection server daemon (10.0.0.1:34984). Jan 15 00:28:23.916722 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 34984 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:23.919177 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:23.928626 systemd-logind[1646]: New session 2 of user core. Jan 15 00:28:23.939003 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 00:28:23.962083 sshd[1791]: Connection closed by 10.0.0.1 port 34984 Jan 15 00:28:23.963014 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 15 00:28:23.973928 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:34984.service: Deactivated successfully. Jan 15 00:28:23.976175 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 00:28:23.977973 systemd-logind[1646]: Session 2 logged out. Waiting for processes to exit. Jan 15 00:28:23.980583 systemd-logind[1646]: Removed session 2. Jan 15 00:28:23.982172 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:34994.service - OpenSSH per-connection server daemon (10.0.0.1:34994). Jan 15 00:28:24.054576 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 34994 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:24.056351 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:24.063578 systemd-logind[1646]: New session 3 of user core. Jan 15 00:28:24.080147 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 15 00:28:24.094540 sshd[1800]: Connection closed by 10.0.0.1 port 34994 Jan 15 00:28:24.094839 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 15 00:28:24.109738 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:34994.service: Deactivated successfully. Jan 15 00:28:24.112122 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 00:28:24.113346 systemd-logind[1646]: Session 3 logged out. Waiting for processes to exit. Jan 15 00:28:24.115551 systemd-logind[1646]: Removed session 3. Jan 15 00:28:24.117335 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:34998.service - OpenSSH per-connection server daemon (10.0.0.1:34998). Jan 15 00:28:24.201125 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 34998 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:24.203311 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:24.211946 systemd-logind[1646]: New session 4 of user core. Jan 15 00:28:24.221198 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 00:28:24.260850 sshd[1809]: Connection closed by 10.0.0.1 port 34998 Jan 15 00:28:24.256214 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 15 00:28:24.281183 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:34998.service: Deactivated successfully. Jan 15 00:28:24.283681 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 00:28:24.285113 systemd-logind[1646]: Session 4 logged out. Waiting for processes to exit. Jan 15 00:28:24.288688 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:35014.service - OpenSSH per-connection server daemon (10.0.0.1:35014). Jan 15 00:28:24.290877 systemd-logind[1646]: Removed session 4. Jan 15 00:28:24.394077 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 35014 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:24.396613 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:24.406345 systemd-logind[1646]: New session 5 of user core. Jan 15 00:28:24.420304 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 00:28:24.458022 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 00:28:24.458599 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:28:24.489999 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 15 00:28:24.493085 sshd[1818]: Connection closed by 10.0.0.1 port 35014 Jan 15 00:28:24.493734 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 15 00:28:24.531569 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:35014.service: Deactivated successfully. Jan 15 00:28:24.535256 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 00:28:24.537571 systemd-logind[1646]: Session 5 logged out. Waiting for processes to exit. Jan 15 00:28:24.542655 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:35024.service - OpenSSH per-connection server daemon (10.0.0.1:35024). Jan 15 00:28:24.544054 systemd-logind[1646]: Removed session 5. Jan 15 00:28:24.629423 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 35024 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:24.631685 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:24.639715 systemd-logind[1646]: New session 6 of user core. 
Jan 15 00:28:24.654106 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 00:28:24.677363 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 00:28:24.678040 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:28:24.688163 sudo[1830]: pam_unix(sudo:session): session closed for user root Jan 15 00:28:24.701065 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 00:28:24.701637 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:28:24.716875 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:28:24.787000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:28:24.789243 augenrules[1852]: No rules Jan 15 00:28:24.790229 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:28:24.790622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:28:24.793376 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 15 00:28:24.795962 sshd[1828]: Connection closed by 10.0.0.1 port 35024 Jan 15 00:28:24.787000 audit[1852]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfa8bab40 a2=420 a3=0 items=0 ppid=1833 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:24.798059 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 15 00:28:24.809893 kernel: audit: type=1305 audit(1768436904.787:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:28:24.809963 kernel: audit: type=1300 audit(1768436904.787:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfa8bab40 a2=420 a3=0 items=0 ppid=1833 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:24.809982 kernel: audit: type=1327 audit(1768436904.787:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:24.787000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:28:24.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.823937 kernel: audit: type=1130 audit(1768436904.790:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.824019 kernel: audit: type=1131 audit(1768436904.790:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:24.792000 audit[1829]: USER_END pid=1829 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.841921 kernel: audit: type=1106 audit(1768436904.792:218): pid=1829 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.842043 kernel: audit: type=1104 audit(1768436904.792:219): pid=1829 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.792000 audit[1829]: CRED_DISP pid=1829 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.799000 audit[1825]: USER_END pid=1825 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.862424 kernel: audit: type=1106 audit(1768436904.799:220): pid=1825 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.862484 kernel: audit: type=1104 audit(1768436904.799:221): pid=1825 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.799000 audit[1825]: CRED_DISP pid=1825 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.878665 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:35024.service: Deactivated successfully. Jan 15 00:28:24.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.47:22-10.0.0.1:35024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.881467 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 00:28:24.883025 systemd-logind[1646]: Session 6 logged out. Waiting for processes to exit. Jan 15 00:28:24.886693 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:35040.service - OpenSSH per-connection server daemon (10.0.0.1:35040). Jan 15 00:28:24.887732 systemd-logind[1646]: Removed session 6. Jan 15 00:28:24.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.47:22-10.0.0.1:35040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:24.892869 kernel: audit: type=1131 audit(1768436904.878:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.47:22-10.0.0.1:35024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.935000 audit[1861]: USER_ACCT pid=1861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.936589 sshd[1861]: Accepted publickey for core from 10.0.0.1 port 35040 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:28:24.936000 audit[1861]: CRED_ACQ pid=1861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.936000 audit[1861]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3787d410 a2=3 a3=0 items=0 ppid=1 pid=1861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:24.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:28:24.938253 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:28:24.945731 systemd-logind[1646]: New session 7 of user core. Jan 15 00:28:24.956166 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 00:28:24.958000 audit[1861]: USER_START pid=1861 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.961000 audit[1864]: CRED_ACQ pid=1864 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:28:24.975000 audit[1865]: USER_ACCT pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.976574 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 00:28:24.975000 audit[1865]: CRED_REFR pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:24.977024 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:28:24.979000 audit[1865]: USER_START pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:28:25.370952 systemd[1]: Starting docker.service - Docker Application Container Engine... 
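The PROCTITLE fields in the audit records above encode the audited command line as hex with NUL separators between arguments. When reading a saved copy of this log they can be decoded with a short pipeline; this assumes xxd is available on the machine doing the inspection:

    # decode an audit PROCTITLE value (example string taken from the auditctl record above)
    printf '%s' '2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573' \
      | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -R /etc/audit/audit.rules

The same trick applies to the NETFILTER_CFG proctitles later in the log, which decode to the iptables/ip6tables invocations dockerd issues while setting up its chains.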
Jan 15 00:28:25.389204 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 00:28:25.677382 dockerd[1886]: time="2026-01-15T00:28:25.677137923Z" level=info msg="Starting up" Jan 15 00:28:25.679732 dockerd[1886]: time="2026-01-15T00:28:25.679554698Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 00:28:25.711381 dockerd[1886]: time="2026-01-15T00:28:25.711261059Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 00:28:25.757536 systemd[1]: var-lib-docker-metacopy\x2dcheck3607575526-merged.mount: Deactivated successfully. Jan 15 00:28:25.787117 dockerd[1886]: time="2026-01-15T00:28:25.787020050Z" level=info msg="Loading containers: start." Jan 15 00:28:25.803931 kernel: Initializing XFRM netlink socket Jan 15 00:28:25.908000 audit[1940]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.908000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc0643a900 a2=0 a3=0 items=0 ppid=1886 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:28:25.914000 audit[1942]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.914000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd68075020 a2=0 a3=0 items=0 ppid=1886 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:28:25.919000 audit[1944]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.919000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb6d77c10 a2=0 a3=0 items=0 ppid=1886 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.919000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:28:25.924000 audit[1946]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.924000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda71a3870 a2=0 a3=0 items=0 ppid=1886 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.924000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:28:25.929000 audit[1948]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.929000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd25425640 a2=0 a3=0 items=0 ppid=1886 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:28:25.935000 audit[1950]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.935000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd74af9260 a2=0 a3=0 items=0 ppid=1886 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:28:25.940000 audit[1952]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.940000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffac80b280 a2=0 a3=0 items=0 ppid=1886 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:28:25.946000 audit[1954]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:25.946000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd156ed6e0 a2=0 a3=0 items=0 ppid=1886 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:25.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:28:26.005000 audit[1957]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.005000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffdb41d020 a2=0 a3=0 items=0 ppid=1886 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.005000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 00:28:26.010000 audit[1959]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.010000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd78d85ad0 a2=0 a3=0 items=0 ppid=1886 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:28:26.015000 audit[1961]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.015000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd7ef01f30 a2=0 a3=0 items=0 ppid=1886 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:28:26.019000 audit[1963]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.019000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcf4496710 a2=0 a3=0 items=0 ppid=1886 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.019000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:28:26.024000 audit[1965]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.024000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe2409d4e0 a2=0 a3=0 items=0 ppid=1886 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:28:26.102000 audit[1995]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.102000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc13562440 a2=0 a3=0 items=0 ppid=1886 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.102000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:28:26.106000 audit[1997]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.106000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffea07abf0 a2=0 a3=0 items=0 ppid=1886 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:28:26.110000 audit[1999]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.110000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc86d3f010 a2=0 a3=0 items=0 ppid=1886 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:28:26.114000 audit[2001]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.114000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcacf523a0 a2=0 a3=0 items=0 ppid=1886 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:28:26.119000 audit[2003]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.119000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb400de30 a2=0 a3=0 items=0 ppid=1886 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:28:26.123000 audit[2005]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.123000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc44e112e0 a2=0 a3=0 items=0 ppid=1886 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:28:26.128000 audit[2007]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2007 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.128000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd4a9aad20 a2=0 a3=0 items=0 ppid=1886 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:28:26.133000 audit[2009]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.133000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd63089de0 a2=0 a3=0 items=0 ppid=1886 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:28:26.138000 audit[2011]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.138000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe88fca740 a2=0 a3=0 items=0 ppid=1886 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 00:28:26.143000 audit[2013]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.143000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff334953e0 a2=0 a3=0 items=0 ppid=1886 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:28:26.148000 audit[2015]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.148000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeb53234b0 a2=0 a3=0 items=0 ppid=1886 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:28:26.153000 audit[2017]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 15 00:28:26.153000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff61c8b990 a2=0 a3=0 items=0 ppid=1886 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:28:26.159000 audit[2019]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.159000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffce358910 a2=0 a3=0 items=0 ppid=1886 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:28:26.170000 audit[2024]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.170000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6ef6efc0 a2=0 a3=0 items=0 ppid=1886 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:28:26.175000 audit[2026]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.175000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffbafcaa80 a2=0 a3=0 items=0 ppid=1886 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:28:26.180000 audit[2028]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.180000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc963efc30 a2=0 a3=0 items=0 ppid=1886 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:28:26.184000 audit[2030]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.184000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe91f555c0 a2=0 a3=0 items=0 ppid=1886 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:28:26.188000 audit[2032]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.188000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd73bf6b10 a2=0 a3=0 items=0 ppid=1886 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.188000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:28:26.193000 audit[2034]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:26.193000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce81ded90 a2=0 a3=0 items=0 ppid=1886 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:28:26.217000 audit[2038]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.217000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd7d628660 a2=0 a3=0 items=0 ppid=1886 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 00:28:26.222000 audit[2040]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.222000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffee1f8220 a2=0 a3=0 items=0 ppid=1886 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 00:28:26.240000 audit[2048]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.240000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffebeeb6d50 a2=0 a3=0 items=0 ppid=1886 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
15 00:28:26.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 00:28:26.257000 audit[2054]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.257000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffee16115f0 a2=0 a3=0 items=0 ppid=1886 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 00:28:26.263000 audit[2056]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.263000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffeba772980 a2=0 a3=0 items=0 ppid=1886 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 00:28:26.268000 audit[2058]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.268000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffed618dde0 a2=0 a3=0 items=0 ppid=1886 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 00:28:26.272000 audit[2060]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.272000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe9ebab130 a2=0 a3=0 items=0 ppid=1886 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:28:26.276000 audit[2062]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:26.276000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe78191ea0 a2=0 a3=0 items=0 ppid=1886 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:26.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 00:28:26.278497 systemd-networkd[1541]: docker0: Link UP Jan 15 00:28:26.285678 dockerd[1886]: time="2026-01-15T00:28:26.285608971Z" level=info msg="Loading containers: done." Jan 15 00:28:26.305376 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck901484718-merged.mount: Deactivated successfully. Jan 15 00:28:26.310193 dockerd[1886]: time="2026-01-15T00:28:26.310020940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 00:28:26.310193 dockerd[1886]: time="2026-01-15T00:28:26.310139432Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 00:28:26.310368 dockerd[1886]: time="2026-01-15T00:28:26.310241382Z" level=info msg="Initializing buildkit" Jan 15 00:28:26.364517 dockerd[1886]: time="2026-01-15T00:28:26.364372683Z" level=info msg="Completed buildkit initialization" Jan 15 00:28:26.371251 dockerd[1886]: time="2026-01-15T00:28:26.371084243Z" level=info msg="Daemon has completed initialization" Jan 15 00:28:26.371251 dockerd[1886]: time="2026-01-15T00:28:26.371222179Z" level=info msg="API listen on /run/docker.sock" Jan 15 00:28:26.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:26.371940 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 00:28:27.128632 containerd[1681]: time="2026-01-15T00:28:27.128492577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 00:28:27.711305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860469339.mount: Deactivated successfully. 
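The NETFILTER_CFG burst above is dockerd registering its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER, for both IPv4 and IPv6) and adding the MASQUERADE rule for 172.17.0.0/16 out of docker0, as the decoded proctitles show. After boot the result can be checked with stock iptables commands, for example:

    # inspect the chains dockerd created (run as root on the node)
    iptables -t nat -nL DOCKER
    iptables -nL DOCKER-FORWARD
    iptables -t nat -nL POSTROUTING | grep MASQUERADE   # expect the 172.17.0.0/16 -> docker0 rule
    ip6tables -nL DOCKER-USER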
Jan 15 00:28:29.063652 containerd[1681]: time="2026-01-15T00:28:29.063565238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:29.064858 containerd[1681]: time="2026-01-15T00:28:29.064762548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 15 00:28:29.067000 containerd[1681]: time="2026-01-15T00:28:29.066752434Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:29.071200 containerd[1681]: time="2026-01-15T00:28:29.071123013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:29.072660 containerd[1681]: time="2026-01-15T00:28:29.072564033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.943968064s" Jan 15 00:28:29.072660 containerd[1681]: time="2026-01-15T00:28:29.072648882Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 15 00:28:29.074035 containerd[1681]: time="2026-01-15T00:28:29.073986810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 00:28:30.761518 containerd[1681]: time="2026-01-15T00:28:30.761352784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:30.762650 containerd[1681]: time="2026-01-15T00:28:30.762593616Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 15 00:28:30.764072 containerd[1681]: time="2026-01-15T00:28:30.763975686Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:30.766985 containerd[1681]: time="2026-01-15T00:28:30.766915039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:30.768175 containerd[1681]: time="2026-01-15T00:28:30.768101252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.694078956s" Jan 15 00:28:30.768175 containerd[1681]: time="2026-01-15T00:28:30.768170462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 15 
00:28:30.768845 containerd[1681]: time="2026-01-15T00:28:30.768721821Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 00:28:32.032511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 00:28:32.035087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:28:32.451229 containerd[1681]: time="2026-01-15T00:28:32.450998754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:32.452986 containerd[1681]: time="2026-01-15T00:28:32.452952056Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 15 00:28:32.454938 containerd[1681]: time="2026-01-15T00:28:32.454880238Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:32.458989 containerd[1681]: time="2026-01-15T00:28:32.458917054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:32.461274 containerd[1681]: time="2026-01-15T00:28:32.461188583Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.69232053s" Jan 15 00:28:32.461274 containerd[1681]: time="2026-01-15T00:28:32.461249757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 15 00:28:32.462318 containerd[1681]: time="2026-01-15T00:28:32.462214540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 00:28:32.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:32.770837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:32.773843 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 15 00:28:32.773907 kernel: audit: type=1130 audit(1768436912.770:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:32.797270 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:28:33.079565 kubelet[2180]: E0115 00:28:33.079306 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:28:33.085044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:28:33.085279 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
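The kubelet exit above is the expected failure mode on a node that has not yet been through kubeadm init/join: the kubelet refuses to start without /var/lib/kubelet/config.yaml, so systemd keeps scheduling restarts until that file exists. Below is a small Python sketch of the same precondition check; the config.yaml path is taken from the error above, while /etc/kubernetes/kubelet.conf is an assumed standard kubeadm location and /etc/kubernetes/pki/ca.crt appears further down in this log. REQUIRED and missing_files are our names.

    import pathlib
    import time

    # Files the kubelet needs before it can come up on a kubeadm-provisioned node.
    REQUIRED = [
        pathlib.Path("/var/lib/kubelet/config.yaml"),   # path from the error above
        pathlib.Path("/etc/kubernetes/kubelet.conf"),   # assumed kubeadm default
        pathlib.Path("/etc/kubernetes/pki/ca.crt"),     # referenced later in this log
    ]

    def missing_files() -> list[pathlib.Path]:
        """Return the subset of REQUIRED paths that do not exist yet."""
        return [p for p in REQUIRED if not p.exists()]

    if __name__ == "__main__":
        # Poll a few times, mirroring the systemd restart loop in the journal:
        # the unit keeps exiting with status=1 until these files are written.
        for _ in range(3):
            gone = missing_files()
            if not gone:
                print("kubelet prerequisites present")
                break
            print("still missing:", ", ".join(str(p) for p in gone))
            time.sleep(10)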
Jan 15 00:28:33.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:28:33.085853 systemd[1]: kubelet.service: Consumed 963ms CPU time, 109.7M memory peak. Jan 15 00:28:33.093910 kernel: audit: type=1131 audit(1768436913.084:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:28:33.492309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057710075.mount: Deactivated successfully. Jan 15 00:28:34.015424 containerd[1681]: time="2026-01-15T00:28:34.015276417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:34.016555 containerd[1681]: time="2026-01-15T00:28:34.016420513Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 15 00:28:34.017862 containerd[1681]: time="2026-01-15T00:28:34.017739886Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:34.019995 containerd[1681]: time="2026-01-15T00:28:34.019925366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:34.020635 containerd[1681]: time="2026-01-15T00:28:34.020520946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.558255049s" Jan 15 00:28:34.020635 containerd[1681]: time="2026-01-15T00:28:34.020607458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 15 00:28:34.021202 containerd[1681]: time="2026-01-15T00:28:34.021114295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 00:28:34.491417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209812558.mount: Deactivated successfully. 
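Each completed pull above ends with a line of the form Pulled image "<ref>" ... size "<bytes>" in <duration>, so an effective pull rate can be read straight out of the journal. A sketch that parses those lines (PULLED and pull_rate are our names; the regex assumes the exact format printed here, including the backslash-escaped quotes inside msg="..."):

    import re

    # Matches containerd's "Pulled image ... size \"N\" in <duration>" lines.
    PULLED = re.compile(
        r'Pulled image "(?P<ref>[^"]+)".*size "(?P<size>\d+)" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def pull_rate(line: str):
        """Return (image ref, MiB/s) for a 'Pulled image' log line, else None."""
        m = PULLED.search(line.replace('\\"', '"'))
        if not m:
            return None
        seconds = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        return m["ref"], int(m["size"]) / (1024 * 1024) / seconds

    if __name__ == "__main__":
        sample = ('Pulled image \\"registry.k8s.io/kube-proxy:v1.32.11\\" with image id '
                  '\\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\\", '
                  'size \\"31160918\\" in 1.558255049s')
        print(pull_rate(sample))   # -> the kube-proxy ref plus roughly 19 MiB/s

Note the reported size is containerd's total image size rather than the bytes actually transferred (the "bytes read" figures above are lower), so treat the result as an effective rate, not raw network throughput.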
Jan 15 00:28:35.558277 containerd[1681]: time="2026-01-15T00:28:35.558161539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:35.558982 containerd[1681]: time="2026-01-15T00:28:35.558954111Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 15 00:28:35.560306 containerd[1681]: time="2026-01-15T00:28:35.560141012Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:35.563264 containerd[1681]: time="2026-01-15T00:28:35.563146786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:35.564112 containerd[1681]: time="2026-01-15T00:28:35.564033589Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.542864482s" Jan 15 00:28:35.564112 containerd[1681]: time="2026-01-15T00:28:35.564084305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 15 00:28:35.564999 containerd[1681]: time="2026-01-15T00:28:35.564907581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 00:28:35.930373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652023065.mount: Deactivated successfully. 
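The var-lib-containerd-tmpmounts-containerd\x2dmount... units being deactivated around each pull are systemd mount units for containerd's temporary mounts: in a unit name systemd drops the leading '/', maps the remaining '/' separators to '-', and escapes literal bytes such as '-' as \xNN. A sketch of the reverse mapping (unit_to_path and unescape_component are our helper names):

    import re

    def unescape_component(comp: str) -> str:
        """Undo systemd's \\xNN escapes inside a single path component."""
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), comp)

    def unit_to_path(unit: str) -> str:
        """Map a systemd .mount unit name back to the mount point it represents."""
        name = unit.removesuffix(".mount")
        # '-' separates path components in unit names; escapes are undone afterwards
        # so a decoded \x2d ('-') is not mistaken for a separator.
        return "/" + "/".join(unescape_component(p) for p in name.split("-"))

    if __name__ == "__main__":
        unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount209812558.mount"
        print(unit_to_path(unit))
        # -> /var/lib/containerd/tmpmounts/containerd-mount209812558

The same rule explains the docker overlay2 unit earlier in the log, whose \x2dbug\x2dcheck escapes decode back to an opaque-bug-check...-merged path under /var/lib/docker/overlay2.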
Jan 15 00:28:35.937233 containerd[1681]: time="2026-01-15T00:28:35.937132442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:28:35.939354 containerd[1681]: time="2026-01-15T00:28:35.939267748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 00:28:35.940660 containerd[1681]: time="2026-01-15T00:28:35.940605985Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:28:35.943739 containerd[1681]: time="2026-01-15T00:28:35.943577542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:28:35.944290 containerd[1681]: time="2026-01-15T00:28:35.944147498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 379.174516ms" Jan 15 00:28:35.944290 containerd[1681]: time="2026-01-15T00:28:35.944177976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 00:28:35.945029 containerd[1681]: time="2026-01-15T00:28:35.944923407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 00:28:36.409955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991851414.mount: Deactivated successfully. 
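Unlike the earlier pulls, the pause:3.10 ImageCreate events above carry a second label, io.cri-containerd.pinned: pinned, which containerd uses to exempt the sandbox (pause) image from image garbage collection. A small filter that picks pinned images out of such lines (pinned_images is our name; the pattern assumes the escaped-quote format exactly as printed in this journal):

    import re

    # ImageCreate events above look like:
    #   ImageCreate event name:\"<ref>\" labels:{key:\"io.cri-containerd.image\" ...}
    # with an extra labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"} block
    # on the sandbox image only.
    NAME = re.compile(r'ImageCreate event name:\\?"(?P<ref>[^"\\]+)')

    def pinned_images(lines):
        """Yield image refs whose ImageCreate event carries the pinned label."""
        for line in lines:
            if "io.cri-containerd.pinned" not in line:
                continue
            m = NAME.search(line)
            if m:
                yield m["ref"]

    if __name__ == "__main__":
        sample = ('ImageCreate event name:\\"registry.k8s.io/pause:3.10\\" '
                  'labels:{key:\\"io.cri-containerd.image\\" value:\\"managed\\"} '
                  'labels:{key:\\"io.cri-containerd.pinned\\" value:\\"pinned\\"}')
        print(list(pinned_images([sample])))   # ['registry.k8s.io/pause:3.10']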
Jan 15 00:28:38.745377 containerd[1681]: time="2026-01-15T00:28:38.745265772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:38.746951 containerd[1681]: time="2026-01-15T00:28:38.746759230Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 15 00:28:38.748440 containerd[1681]: time="2026-01-15T00:28:38.748350440Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:38.751689 containerd[1681]: time="2026-01-15T00:28:38.751438006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:28:38.752432 containerd[1681]: time="2026-01-15T00:28:38.752339288Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.807359635s" Jan 15 00:28:38.752432 containerd[1681]: time="2026-01-15T00:28:38.752396174Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 15 00:28:41.107422 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:41.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:41.108135 systemd[1]: kubelet.service: Consumed 963ms CPU time, 109.7M memory peak. Jan 15 00:28:41.112120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:28:41.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:41.129987 kernel: audit: type=1130 audit(1768436921.107:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:41.130076 kernel: audit: type=1131 audit(1768436921.107:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:41.148311 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-7.scope)... Jan 15 00:28:41.148368 systemd[1]: Reloading... Jan 15 00:28:41.253921 zram_generator::config[2379]: No configuration found. Jan 15 00:28:41.524648 systemd[1]: Reloading finished in 375 ms. 
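The kauditd lines carry two timestamps: the journal's wall-clock prefix and the kernel's own audit(<epoch-seconds>:<serial>) stamp. Converting the epoch value back to UTC shows the two agree, e.g. audit(1768436912.770:273) earlier in this log is Jan 15 00:28:32.770. A sketch of that conversion (AUDIT_TS and audit_stamp are our names):

    import re
    from datetime import datetime, timezone

    # Matches the "audit(<epoch.seconds>:<serial>)" stamp inside kauditd lines.
    AUDIT_TS = re.compile(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)")

    def audit_stamp(line: str):
        """Return (UTC datetime, serial) parsed from a kauditd log line, else None."""
        m = AUDIT_TS.search(line)
        if not m:
            return None
        when = datetime.fromtimestamp(float(m["epoch"]), tz=timezone.utc)
        return when, int(m["serial"])

    if __name__ == "__main__":
        line = "kernel: audit: type=1130 audit(1768436912.770:273): pid=1 uid=0 ..."
        when, serial = audit_stamp(line)
        print(when.isoformat(), serial)
        # -> 2026-01-15T00:28:32.770000+00:00 273, matching the journal prefix.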
Jan 15 00:28:41.560000 audit: BPF prog-id=61 op=LOAD Jan 15 00:28:41.564884 kernel: audit: type=1334 audit(1768436921.560:277): prog-id=61 op=LOAD Jan 15 00:28:41.564953 kernel: audit: type=1334 audit(1768436921.561:278): prog-id=56 op=UNLOAD Jan 15 00:28:41.561000 audit: BPF prog-id=56 op=UNLOAD Jan 15 00:28:41.562000 audit: BPF prog-id=62 op=LOAD Jan 15 00:28:41.570892 kernel: audit: type=1334 audit(1768436921.562:279): prog-id=62 op=LOAD Jan 15 00:28:41.570944 kernel: audit: type=1334 audit(1768436921.562:280): prog-id=44 op=UNLOAD Jan 15 00:28:41.562000 audit: BPF prog-id=44 op=UNLOAD Jan 15 00:28:41.573865 kernel: audit: type=1334 audit(1768436921.562:281): prog-id=63 op=LOAD Jan 15 00:28:41.562000 audit: BPF prog-id=63 op=LOAD Jan 15 00:28:41.576719 kernel: audit: type=1334 audit(1768436921.562:282): prog-id=64 op=LOAD Jan 15 00:28:41.562000 audit: BPF prog-id=64 op=LOAD Jan 15 00:28:41.562000 audit: BPF prog-id=45 op=UNLOAD Jan 15 00:28:41.583083 kernel: audit: type=1334 audit(1768436921.562:283): prog-id=45 op=UNLOAD Jan 15 00:28:41.583133 kernel: audit: type=1334 audit(1768436921.562:284): prog-id=46 op=UNLOAD Jan 15 00:28:41.562000 audit: BPF prog-id=46 op=UNLOAD Jan 15 00:28:41.564000 audit: BPF prog-id=65 op=LOAD Jan 15 00:28:41.564000 audit: BPF prog-id=47 op=UNLOAD Jan 15 00:28:41.567000 audit: BPF prog-id=66 op=LOAD Jan 15 00:28:41.567000 audit: BPF prog-id=41 op=UNLOAD Jan 15 00:28:41.567000 audit: BPF prog-id=67 op=LOAD Jan 15 00:28:41.567000 audit: BPF prog-id=68 op=LOAD Jan 15 00:28:41.567000 audit: BPF prog-id=42 op=UNLOAD Jan 15 00:28:41.567000 audit: BPF prog-id=43 op=UNLOAD Jan 15 00:28:41.596000 audit: BPF prog-id=69 op=LOAD Jan 15 00:28:41.596000 audit: BPF prog-id=70 op=LOAD Jan 15 00:28:41.596000 audit: BPF prog-id=54 op=UNLOAD Jan 15 00:28:41.596000 audit: BPF prog-id=55 op=UNLOAD Jan 15 00:28:41.599000 audit: BPF prog-id=71 op=LOAD Jan 15 00:28:41.599000 audit: BPF prog-id=58 op=UNLOAD Jan 15 00:28:41.599000 audit: BPF prog-id=72 op=LOAD Jan 15 00:28:41.599000 audit: BPF prog-id=73 op=LOAD Jan 15 00:28:41.599000 audit: BPF prog-id=59 op=UNLOAD Jan 15 00:28:41.599000 audit: BPF prog-id=60 op=UNLOAD Jan 15 00:28:41.601000 audit: BPF prog-id=74 op=LOAD Jan 15 00:28:41.601000 audit: BPF prog-id=57 op=UNLOAD Jan 15 00:28:41.602000 audit: BPF prog-id=75 op=LOAD Jan 15 00:28:41.602000 audit: BPF prog-id=51 op=UNLOAD Jan 15 00:28:41.602000 audit: BPF prog-id=76 op=LOAD Jan 15 00:28:41.602000 audit: BPF prog-id=77 op=LOAD Jan 15 00:28:41.602000 audit: BPF prog-id=52 op=UNLOAD Jan 15 00:28:41.602000 audit: BPF prog-id=53 op=UNLOAD Jan 15 00:28:41.603000 audit: BPF prog-id=78 op=LOAD Jan 15 00:28:41.603000 audit: BPF prog-id=48 op=UNLOAD Jan 15 00:28:41.604000 audit: BPF prog-id=79 op=LOAD Jan 15 00:28:41.604000 audit: BPF prog-id=80 op=LOAD Jan 15 00:28:41.604000 audit: BPF prog-id=49 op=UNLOAD Jan 15 00:28:41.604000 audit: BPF prog-id=50 op=UNLOAD Jan 15 00:28:41.625860 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 00:28:41.625996 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 00:28:41.626386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:41.626512 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.4M memory peak. Jan 15 00:28:41.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 15 00:28:41.629002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:28:41.849138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:41.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:41.865375 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:28:41.948701 kubelet[2431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:28:41.948701 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:28:41.948701 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:28:41.949291 kubelet[2431]: I0115 00:28:41.949008 2431 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:28:42.125082 kubelet[2431]: I0115 00:28:42.124860 2431 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:28:42.125082 kubelet[2431]: I0115 00:28:42.124919 2431 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:28:42.125428 kubelet[2431]: I0115 00:28:42.125239 2431 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:28:42.161010 kubelet[2431]: E0115 00:28:42.160889 2431 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:42.163451 kubelet[2431]: I0115 00:28:42.163379 2431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:28:42.177842 kubelet[2431]: I0115 00:28:42.177734 2431 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:28:42.191836 kubelet[2431]: I0115 00:28:42.191603 2431 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 00:28:42.192688 kubelet[2431]: I0115 00:28:42.192570 2431 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:28:42.193608 kubelet[2431]: I0115 00:28:42.192668 2431 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:28:42.193872 kubelet[2431]: I0115 00:28:42.193659 2431 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:28:42.193872 kubelet[2431]: I0115 00:28:42.193679 2431 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:28:42.194205 kubelet[2431]: I0115 00:28:42.194119 2431 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:28:42.198844 kubelet[2431]: I0115 00:28:42.198695 2431 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:28:42.198902 kubelet[2431]: I0115 00:28:42.198768 2431 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:28:42.199077 kubelet[2431]: I0115 00:28:42.199024 2431 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:28:42.199077 kubelet[2431]: I0115 00:28:42.199084 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:28:42.203749 kubelet[2431]: I0115 00:28:42.203587 2431 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:28:42.204610 kubelet[2431]: I0115 00:28:42.204457 2431 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:28:42.205010 kubelet[2431]: W0115 00:28:42.204737 2431 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 15 00:28:42.205119 kubelet[2431]: W0115 00:28:42.204935 2431 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 15 00:28:42.205119 kubelet[2431]: E0115 00:28:42.205038 2431 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:42.205119 kubelet[2431]: E0115 00:28:42.205050 2431 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:42.205603 kubelet[2431]: W0115 00:28:42.205538 2431 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 00:28:42.209986 kubelet[2431]: I0115 00:28:42.209915 2431 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:28:42.210133 kubelet[2431]: I0115 00:28:42.210025 2431 server.go:1287] "Started kubelet" Jan 15 00:28:42.210665 kubelet[2431]: I0115 00:28:42.210436 2431 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:28:42.216501 kubelet[2431]: I0115 00:28:42.216301 2431 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:28:42.220885 kubelet[2431]: I0115 00:28:42.219107 2431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:28:42.220885 kubelet[2431]: I0115 00:28:42.219655 2431 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:28:42.221289 kubelet[2431]: I0115 00:28:42.221178 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:28:42.222238 kubelet[2431]: I0115 00:28:42.221391 2431 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:28:42.222238 kubelet[2431]: I0115 00:28:42.221561 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:28:42.222628 kubelet[2431]: E0115 00:28:42.220161 2431 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ac002dc02aad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:28:42.209970905 +0000 UTC m=+0.337861734,LastTimestamp:2026-01-15 00:28:42.209970905 +0000 UTC m=+0.337861734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:28:42.223228 kubelet[2431]: I0115 00:28:42.223146 2431 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:28:42.223228 kubelet[2431]: I0115 00:28:42.223220 
2431 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:28:42.224071 kubelet[2431]: W0115 00:28:42.224005 2431 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 15 00:28:42.224071 kubelet[2431]: E0115 00:28:42.224065 2431 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:42.224550 kubelet[2431]: E0115 00:28:42.224411 2431 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:28:42.225431 kubelet[2431]: E0115 00:28:42.225313 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:28:42.225637 kubelet[2431]: E0115 00:28:42.225533 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" Jan 15 00:28:42.225898 kubelet[2431]: I0115 00:28:42.225767 2431 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:28:42.226046 kubelet[2431]: I0115 00:28:42.225947 2431 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:28:42.227936 kubelet[2431]: I0115 00:28:42.227912 2431 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:28:42.229000 audit[2444]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.229000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe15ea67b0 a2=0 a3=0 items=0 ppid=2431 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:28:42.232000 audit[2445]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.232000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1ece7d80 a2=0 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:28:42.237000 audit[2447]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.237000 audit[2447]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb2b39190 a2=0 a3=0 items=0 ppid=2431 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:28:42.242000 audit[2449]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.242000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff4ee896d0 a2=0 a3=0 items=0 ppid=2431 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:28:42.250664 kubelet[2431]: I0115 00:28:42.250509 2431 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:28:42.250664 kubelet[2431]: I0115 00:28:42.250551 2431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:28:42.250664 kubelet[2431]: I0115 00:28:42.250573 2431 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:28:42.256000 audit[2456]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.256000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffed84f6f10 a2=0 a3=0 items=0 ppid=2431 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 15 00:28:42.257720 kubelet[2431]: I0115 00:28:42.257621 2431 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 15 00:28:42.258000 audit[2458]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.258000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd044fd1e0 a2=0 a3=0 items=0 ppid=2431 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:28:42.259000 audit[2457]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:42.259000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcbc014310 a2=0 a3=0 items=0 ppid=2431 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:28:42.261270 kubelet[2431]: I0115 00:28:42.260740 2431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:28:42.261270 kubelet[2431]: I0115 00:28:42.260900 2431 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:28:42.261270 kubelet[2431]: I0115 00:28:42.260954 2431 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
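All of the reflector, lease and event errors in this stretch fail the same way, dial tcp 10.0.0.47:6443: connect: connection refused, which is expected at this point: the kubelet is up, but the kube-apiserver it bootstraps against will only exist once the static pods under /etc/kubernetes/manifests are running. A hedged sketch of the same reachability check, reusing the 10.0.0.47:6443 endpoint from the log (apiserver_reachable is our helper, not kubelet code):

    import socket
    import time

    API_HOST, API_PORT = "10.0.0.47", 6443   # endpoint taken from the log lines above

    def apiserver_reachable(host: str = API_HOST, port: int = API_PORT, timeout: float = 2.0) -> bool:
        """Plain TCP connect to the apiserver port; refused or timed out -> False."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Poll until the static kube-apiserver pod starts answering on 6443.
        while not apiserver_reachable():
            print(f"{API_HOST}:{API_PORT} not reachable yet (connection refused is "
                  "expected until the kube-apiserver static pod is running)")
            time.sleep(5)
        print("kube-apiserver port is open")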
Jan 15 00:28:42.261270 kubelet[2431]: I0115 00:28:42.260991 2431 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:28:42.261270 kubelet[2431]: E0115 00:28:42.261050 2431 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:28:42.261000 audit[2460]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.261000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6742acb0 a2=0 a3=0 items=0 ppid=2431 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:28:42.262697 kubelet[2431]: W0115 00:28:42.262535 2431 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 15 00:28:42.262697 kubelet[2431]: E0115 00:28:42.262566 2431 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:42.263000 audit[2461]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:42.264000 audit[2462]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:42.263000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff99a59f70 a2=0 a3=0 items=0 ppid=2431 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.264000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe46ecfc10 a2=0 a3=0 items=0 ppid=2431 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:28:42.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:28:42.267000 audit[2464]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:42.267000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd310164f0 a2=0 a3=0 items=0 ppid=2431 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 00:28:42.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:28:42.270000 audit[2465]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:42.270000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff105153d0 a2=0 a3=0 items=0 ppid=2431 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:42.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:28:42.325695 kubelet[2431]: E0115 00:28:42.325543 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:28:42.329083 kubelet[2431]: I0115 00:28:42.328974 2431 policy_none.go:49] "None policy: Start" Jan 15 00:28:42.329233 kubelet[2431]: I0115 00:28:42.329134 2431 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:28:42.329233 kubelet[2431]: I0115 00:28:42.329218 2431 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:28:42.340538 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 00:28:42.361943 kubelet[2431]: E0115 00:28:42.361879 2431 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 00:28:42.368580 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 00:28:42.375362 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 00:28:42.393838 kubelet[2431]: I0115 00:28:42.393540 2431 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:28:42.394199 kubelet[2431]: I0115 00:28:42.394038 2431 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:28:42.394247 kubelet[2431]: I0115 00:28:42.394147 2431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:28:42.394847 kubelet[2431]: I0115 00:28:42.394715 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:28:42.396086 kubelet[2431]: E0115 00:28:42.396019 2431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 00:28:42.396158 kubelet[2431]: E0115 00:28:42.396127 2431 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 00:28:42.426861 kubelet[2431]: E0115 00:28:42.426680 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Jan 15 00:28:42.497937 kubelet[2431]: I0115 00:28:42.497332 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:28:42.498262 kubelet[2431]: E0115 00:28:42.498227 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 15 00:28:42.576646 systemd[1]: Created slice kubepods-burstable-pod56b7dfd130d942d5981fa39cf5d35ed7.slice - libcontainer container kubepods-burstable-pod56b7dfd130d942d5981fa39cf5d35ed7.slice. Jan 15 00:28:42.598676 kubelet[2431]: E0115 00:28:42.598561 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:42.603900 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 15 00:28:42.614042 kubelet[2431]: E0115 00:28:42.613943 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:42.617994 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
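The kubepods-burstable-pod<uid>.slice units created above follow the kubelet's systemd cgroup-driver naming: one leaf slice per pod, nested under kubepods.slice and the QoS slice, with the pod UID embedded in the name; the same three UIDs reappear in the volume-reconciler lines that follow, tying them to kube-apiserver-localhost, kube-scheduler-localhost and kube-controller-manager-localhost. A sketch of that mapping (pod_slice and cgroup_path are our names; the /sys/fs/cgroup prefix is an assumption for a cgroup v2 host like this one rather than something shown in the log):

    # Reconstruct the slice names used above for the control-plane static pods.
    # UIDs are taken from the "Created slice kubepods-burstable-pod<uid>.slice"
    # lines; the naming pattern is inferred from this log, not kubelet source.
    PODS = {
        "kube-apiserver-localhost": "56b7dfd130d942d5981fa39cf5d35ed7",
        "kube-scheduler-localhost": "0b8273f45c576ca70f8db6fe540c065c",
        "kube-controller-manager-localhost": "73f4d0ebfe2f50199eb060021cc3bcbf",
    }

    def pod_slice(uid: str, qos: str = "burstable") -> str:
        """Leaf slice name for a pod UID under the given QoS class."""
        return f"kubepods-{qos}-pod{uid}.slice"

    def cgroup_path(uid: str, qos: str = "burstable") -> str:
        """Full cgroup path: systemd nests a slice under each of its '-' ancestors."""
        return f"/sys/fs/cgroup/kubepods.slice/kubepods-{qos}.slice/{pod_slice(uid, qos)}"

    if __name__ == "__main__":
        for pod, uid in PODS.items():
            print(f"{pod}: {cgroup_path(uid)}")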
Jan 15 00:28:42.622267 kubelet[2431]: E0115 00:28:42.622174 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:42.626146 kubelet[2431]: I0115 00:28:42.625943 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:42.626146 kubelet[2431]: I0115 00:28:42.626025 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:42.626146 kubelet[2431]: I0115 00:28:42.626053 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:42.626146 kubelet[2431]: I0115 00:28:42.626068 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:42.626146 kubelet[2431]: I0115 00:28:42.626082 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:42.626402 kubelet[2431]: I0115 00:28:42.626095 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:42.626402 kubelet[2431]: I0115 00:28:42.626106 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:42.626402 kubelet[2431]: I0115 00:28:42.626120 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:42.626402 kubelet[2431]: I0115 00:28:42.626216 2431 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:42.701615 kubelet[2431]: I0115 00:28:42.701427 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:28:42.702236 kubelet[2431]: E0115 00:28:42.702172 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 15 00:28:42.828584 kubelet[2431]: E0115 00:28:42.828389 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Jan 15 00:28:42.900238 kubelet[2431]: E0115 00:28:42.900005 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:42.901519 containerd[1681]: time="2026-01-15T00:28:42.901354323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56b7dfd130d942d5981fa39cf5d35ed7,Namespace:kube-system,Attempt:0,}" Jan 15 00:28:42.914741 kubelet[2431]: E0115 00:28:42.914646 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:42.915602 containerd[1681]: time="2026-01-15T00:28:42.915454272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 15 00:28:42.922861 kubelet[2431]: E0115 00:28:42.922678 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:42.923600 containerd[1681]: time="2026-01-15T00:28:42.923414894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 15 00:28:42.960187 containerd[1681]: time="2026-01-15T00:28:42.960095494Z" level=info msg="connecting to shim 55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b" address="unix:///run/containerd/s/f707f016cc7d0e6ba8b9dbe78ad31df548fed27583d24dc8d816b299a555a83c" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:28:42.971243 containerd[1681]: time="2026-01-15T00:28:42.971144923Z" level=info msg="connecting to shim c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78" address="unix:///run/containerd/s/99cc0c765a3aee8c9fa7deb7d04c8464a08c8cd65e6bcb1ff56539582af344a0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:28:42.989887 containerd[1681]: time="2026-01-15T00:28:42.989206653Z" level=info msg="connecting to shim 3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc" address="unix:///run/containerd/s/75c6c84e6a3fbf802f93311008709111ed21d82379157b53caaa0ec33f115cf8" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:28:43.024063 systemd[1]: Started cri-containerd-c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78.scope - libcontainer container 
c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78. Jan 15 00:28:43.041131 systemd[1]: Started cri-containerd-55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b.scope - libcontainer container 55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b. Jan 15 00:28:43.047181 systemd[1]: Started cri-containerd-3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc.scope - libcontainer container 3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc. Jan 15 00:28:43.052000 audit: BPF prog-id=81 op=LOAD Jan 15 00:28:43.057000 audit: BPF prog-id=82 op=LOAD Jan 15 00:28:43.057000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.057000 audit: BPF prog-id=82 op=UNLOAD Jan 15 00:28:43.057000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.058000 audit: BPF prog-id=83 op=LOAD Jan 15 00:28:43.058000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.058000 audit: BPF prog-id=84 op=LOAD Jan 15 00:28:43.058000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.058000 audit: BPF prog-id=84 op=UNLOAD Jan 15 00:28:43.058000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.058000 audit: BPF prog-id=83 op=UNLOAD Jan 15 00:28:43.058000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.058000 audit: BPF prog-id=85 op=LOAD Jan 15 00:28:43.058000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336333364643863353931656537653933646464353861343065366266 Jan 15 00:28:43.065000 audit: BPF prog-id=86 op=LOAD Jan 15 00:28:43.066000 audit: BPF prog-id=87 op=LOAD Jan 15 00:28:43.066000 audit[2523]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.066000 audit: BPF prog-id=87 op=UNLOAD Jan 15 00:28:43.066000 audit[2523]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.066000 audit: BPF prog-id=88 op=LOAD Jan 15 00:28:43.066000 audit[2523]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
00:28:43.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.067000 audit: BPF prog-id=89 op=LOAD Jan 15 00:28:43.067000 audit[2523]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.067000 audit: BPF prog-id=89 op=UNLOAD Jan 15 00:28:43.067000 audit[2523]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.067000 audit: BPF prog-id=88 op=UNLOAD Jan 15 00:28:43.067000 audit[2523]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.067000 audit: BPF prog-id=90 op=LOAD Jan 15 00:28:43.067000 audit[2523]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2474 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535636637633761383337353963636461383739333936343730356338 Jan 15 00:28:43.077000 audit: BPF prog-id=91 op=LOAD Jan 15 00:28:43.078000 audit: BPF prog-id=92 op=LOAD Jan 15 00:28:43.078000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.078000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=92 op=UNLOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=93 op=LOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=94 op=LOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=94 op=UNLOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=93 op=UNLOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.080000 audit: BPF prog-id=95 op=LOAD Jan 15 00:28:43.080000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2506 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364313135646439643066303363653539353434613164383230316634 Jan 15 00:28:43.105428 kubelet[2431]: I0115 00:28:43.105389 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:28:43.108432 kubelet[2431]: E0115 00:28:43.108343 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 15 00:28:43.151955 containerd[1681]: time="2026-01-15T00:28:43.150626390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78\"" Jan 15 00:28:43.153002 kubelet[2431]: E0115 00:28:43.152358 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:43.154605 containerd[1681]: time="2026-01-15T00:28:43.153751117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56b7dfd130d942d5981fa39cf5d35ed7,Namespace:kube-system,Attempt:0,} returns sandbox id \"55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b\"" Jan 15 00:28:43.156873 kubelet[2431]: E0115 00:28:43.156552 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:43.158163 containerd[1681]: time="2026-01-15T00:28:43.158097301Z" level=info msg="CreateContainer within sandbox \"c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 00:28:43.158250 containerd[1681]: time="2026-01-15T00:28:43.158204597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc\"" Jan 15 00:28:43.159064 kubelet[2431]: E0115 00:28:43.158995 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:43.161002 containerd[1681]: time="2026-01-15T00:28:43.159742603Z" level=info msg="CreateContainer within sandbox \"55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 00:28:43.169174 containerd[1681]: time="2026-01-15T00:28:43.169090765Z" level=info msg="Container 7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:28:43.179180 containerd[1681]: time="2026-01-15T00:28:43.179103744Z" level=info msg="CreateContainer within sandbox \"3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 00:28:43.183428 containerd[1681]: time="2026-01-15T00:28:43.183295014Z" level=info msg="Container 6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:28:43.190879 containerd[1681]: time="2026-01-15T00:28:43.190558795Z" level=info msg="CreateContainer within sandbox \"c633dd8c591ee7e93ddd58a40e6bf6dae90564aa1d922e03e2f3b8d4b1be5d78\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472\"" Jan 15 00:28:43.192424 containerd[1681]: time="2026-01-15T00:28:43.192298563Z" level=info msg="StartContainer for \"7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472\"" Jan 15 00:28:43.194787 containerd[1681]: time="2026-01-15T00:28:43.194743428Z" level=info msg="connecting to shim 7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472" address="unix:///run/containerd/s/99cc0c765a3aee8c9fa7deb7d04c8464a08c8cd65e6bcb1ff56539582af344a0" protocol=ttrpc version=3 Jan 15 00:28:43.195256 containerd[1681]: time="2026-01-15T00:28:43.195134643Z" level=info msg="CreateContainer within sandbox \"55cf7c7a83759ccda8793964705c84632ef7833573941a08d091d1e36e7c520b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5\"" Jan 15 00:28:43.196869 containerd[1681]: time="2026-01-15T00:28:43.196153985Z" level=info msg="StartContainer for \"6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5\"" Jan 15 00:28:43.197696 containerd[1681]: time="2026-01-15T00:28:43.197660829Z" level=info msg="connecting to shim 6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5" address="unix:///run/containerd/s/f707f016cc7d0e6ba8b9dbe78ad31df548fed27583d24dc8d816b299a555a83c" protocol=ttrpc version=3 Jan 15 00:28:43.201148 containerd[1681]: time="2026-01-15T00:28:43.200978007Z" level=info msg="Container d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:28:43.213581 containerd[1681]: time="2026-01-15T00:28:43.213531208Z" level=info msg="CreateContainer within sandbox \"3d115dd9d0f03ce59544a1d8201f4f1a804b82ad1fe16a0f93b56d8713f219cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b\"" Jan 15 00:28:43.214594 containerd[1681]: time="2026-01-15T00:28:43.214568484Z" level=info msg="StartContainer for \"d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b\"" Jan 15 00:28:43.216523 containerd[1681]: time="2026-01-15T00:28:43.216224496Z" level=info msg="connecting to shim d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b" address="unix:///run/containerd/s/75c6c84e6a3fbf802f93311008709111ed21d82379157b53caaa0ec33f115cf8" protocol=ttrpc version=3 Jan 15 00:28:43.223924 kubelet[2431]: W0115 00:28:43.223685 2431 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 15 00:28:43.224055 kubelet[2431]: E0115 00:28:43.223923 2431 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:28:43.237427 systemd[1]: Started cri-containerd-7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472.scope - libcontainer container 7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472. Jan 15 00:28:43.263238 systemd[1]: Started cri-containerd-6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5.scope - libcontainer container 6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5. Jan 15 00:28:43.274227 systemd[1]: Started cri-containerd-d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b.scope - libcontainer container d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b. Jan 15 00:28:43.291000 audit: BPF prog-id=96 op=LOAD Jan 15 00:28:43.293000 audit: BPF prog-id=97 op=LOAD Jan 15 00:28:43.293000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.294000 audit: BPF prog-id=97 op=UNLOAD Jan 15 00:28:43.294000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.296000 audit: BPF prog-id=98 op=LOAD Jan 15 00:28:43.296000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.296000 audit: BPF prog-id=99 op=LOAD Jan 15 00:28:43.296000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2474 pid=2610 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.297000 audit: BPF prog-id=99 op=UNLOAD Jan 15 00:28:43.297000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.297000 audit: BPF prog-id=98 op=UNLOAD Jan 15 00:28:43.297000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.297000 audit: BPF prog-id=100 op=LOAD Jan 15 00:28:43.297000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2474 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661353065383339646236616563656561643935656239653265343939 Jan 15 00:28:43.298000 audit: BPF prog-id=101 op=LOAD Jan 15 00:28:43.298000 audit: BPF prog-id=102 op=LOAD Jan 15 00:28:43.298000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.298000 audit: BPF prog-id=102 op=UNLOAD Jan 15 00:28:43.298000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.299000 audit: BPF prog-id=103 op=LOAD Jan 15 00:28:43.299000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.299000 audit: BPF prog-id=104 op=LOAD Jan 15 00:28:43.299000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.299000 audit: BPF prog-id=104 op=UNLOAD Jan 15 00:28:43.299000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.299000 audit: BPF prog-id=103 op=UNLOAD Jan 15 00:28:43.299000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.299000 audit: BPF prog-id=105 op=LOAD Jan 15 00:28:43.299000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2494 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.299000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761363132333762313664623866626537653636653232613832336162 Jan 15 00:28:43.308000 audit: BPF prog-id=106 op=LOAD Jan 15 00:28:43.309000 audit: BPF prog-id=107 op=LOAD Jan 15 00:28:43.309000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.309000 audit: BPF prog-id=107 op=UNLOAD Jan 15 00:28:43.309000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.309000 audit: BPF prog-id=108 op=LOAD Jan 15 00:28:43.309000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.309000 audit: BPF prog-id=109 op=LOAD Jan 15 00:28:43.309000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.310000 audit: BPF prog-id=109 op=UNLOAD Jan 15 00:28:43.310000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.310000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.310000 audit: BPF prog-id=108 op=UNLOAD Jan 15 00:28:43.310000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.310000 audit: BPF prog-id=110 op=LOAD Jan 15 00:28:43.310000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2506 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:43.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434383330613665616461336432363766623961356661663036656637 Jan 15 00:28:43.368971 containerd[1681]: time="2026-01-15T00:28:43.368746467Z" level=info msg="StartContainer for \"6a50e839db6aeceead95eb9e2e499712538422018d25ce9bdc81cd4968c479b5\" returns successfully" Jan 15 00:28:43.375201 containerd[1681]: time="2026-01-15T00:28:43.375113323Z" level=info msg="StartContainer for \"7a61237b16db8fbe7e66e22a823aba233306622d1f44cffe915bddea0ef60472\" returns successfully" Jan 15 00:28:43.397562 containerd[1681]: time="2026-01-15T00:28:43.397413775Z" level=info msg="StartContainer for \"d4830a6eada3d267fb9a5faf06ef71ba360dda26e805ad7a6445260848dad50b\" returns successfully" Jan 15 00:28:43.912100 kubelet[2431]: I0115 00:28:43.911966 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:28:44.298294 kubelet[2431]: E0115 00:28:44.295423 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:44.298294 kubelet[2431]: E0115 00:28:44.295663 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:44.304080 kubelet[2431]: E0115 00:28:44.303757 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:44.306283 kubelet[2431]: E0115 00:28:44.306206 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:44.306616 kubelet[2431]: E0115 00:28:44.306367 2431 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:28:44.307228 kubelet[2431]: E0115 
00:28:44.307028 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:44.900977 kubelet[2431]: E0115 00:28:44.900891 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 15 00:28:44.997321 kubelet[2431]: I0115 00:28:44.997238 2431 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:28:44.997321 kubelet[2431]: E0115 00:28:44.997318 2431 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 15 00:28:45.025114 kubelet[2431]: I0115 00:28:45.025054 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:45.032257 kubelet[2431]: E0115 00:28:45.032169 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:45.032257 kubelet[2431]: I0115 00:28:45.032259 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:45.035249 kubelet[2431]: E0115 00:28:45.035194 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:45.035249 kubelet[2431]: I0115 00:28:45.035249 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:45.037134 kubelet[2431]: E0115 00:28:45.037068 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:45.200623 kubelet[2431]: I0115 00:28:45.200296 2431 apiserver.go:52] "Watching apiserver" Jan 15 00:28:45.223543 kubelet[2431]: I0115 00:28:45.223338 2431 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:28:45.306624 kubelet[2431]: I0115 00:28:45.306579 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:45.307379 kubelet[2431]: I0115 00:28:45.307207 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:45.307444 kubelet[2431]: I0115 00:28:45.307431 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:45.311379 kubelet[2431]: E0115 00:28:45.311292 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:45.311701 kubelet[2431]: E0115 00:28:45.311595 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:45.311763 kubelet[2431]: E0115 00:28:45.311735 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:45.315194 kubelet[2431]: E0115 00:28:45.311957 2431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:45.315194 kubelet[2431]: E0115 00:28:45.312119 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:45.315194 kubelet[2431]: E0115 00:28:45.312559 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:46.311878 kubelet[2431]: I0115 00:28:46.310419 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:46.311878 kubelet[2431]: I0115 00:28:46.310561 2431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:46.319857 kubelet[2431]: E0115 00:28:46.319748 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:46.327081 kubelet[2431]: E0115 00:28:46.327004 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:47.312034 kubelet[2431]: E0115 00:28:47.311982 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:47.313167 kubelet[2431]: E0115 00:28:47.313042 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:47.315085 systemd[1]: Reload requested from client PID 2710 ('systemctl') (unit session-7.scope)... Jan 15 00:28:47.315134 systemd[1]: Reloading... Jan 15 00:28:47.529886 zram_generator::config[2752]: No configuration found. Jan 15 00:28:47.849370 systemd[1]: Reloading finished in 533 ms. Jan 15 00:28:47.890610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:28:47.905755 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 00:28:47.906271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:47.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:47.906383 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 132.3M memory peak. Jan 15 00:28:47.909926 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 15 00:28:47.910015 kernel: audit: type=1131 audit(1768436927.905:379): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:28:47.910623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 00:28:47.911000 audit: BPF prog-id=111 op=LOAD Jan 15 00:28:47.922918 kernel: audit: type=1334 audit(1768436927.911:380): prog-id=111 op=LOAD Jan 15 00:28:47.922995 kernel: audit: type=1334 audit(1768436927.911:381): prog-id=66 op=UNLOAD Jan 15 00:28:47.911000 audit: BPF prog-id=66 op=UNLOAD Jan 15 00:28:47.911000 audit: BPF prog-id=112 op=LOAD Jan 15 00:28:47.929585 kernel: audit: type=1334 audit(1768436927.911:382): prog-id=112 op=LOAD Jan 15 00:28:47.911000 audit: BPF prog-id=113 op=LOAD Jan 15 00:28:47.933386 kernel: audit: type=1334 audit(1768436927.911:383): prog-id=113 op=LOAD Jan 15 00:28:47.933589 kernel: audit: type=1334 audit(1768436927.911:384): prog-id=67 op=UNLOAD Jan 15 00:28:47.911000 audit: BPF prog-id=67 op=UNLOAD Jan 15 00:28:47.911000 audit: BPF prog-id=68 op=UNLOAD Jan 15 00:28:47.941422 kernel: audit: type=1334 audit(1768436927.911:385): prog-id=68 op=UNLOAD Jan 15 00:28:47.941549 kernel: audit: type=1334 audit(1768436927.913:386): prog-id=114 op=LOAD Jan 15 00:28:47.913000 audit: BPF prog-id=114 op=LOAD Jan 15 00:28:47.913000 audit: BPF prog-id=65 op=UNLOAD Jan 15 00:28:47.949988 kernel: audit: type=1334 audit(1768436927.913:387): prog-id=65 op=UNLOAD Jan 15 00:28:47.950043 kernel: audit: type=1334 audit(1768436927.915:388): prog-id=115 op=LOAD Jan 15 00:28:47.915000 audit: BPF prog-id=115 op=LOAD Jan 15 00:28:47.915000 audit: BPF prog-id=75 op=UNLOAD Jan 15 00:28:47.915000 audit: BPF prog-id=116 op=LOAD Jan 15 00:28:47.915000 audit: BPF prog-id=117 op=LOAD Jan 15 00:28:47.915000 audit: BPF prog-id=76 op=UNLOAD Jan 15 00:28:47.915000 audit: BPF prog-id=77 op=UNLOAD Jan 15 00:28:47.917000 audit: BPF prog-id=118 op=LOAD Jan 15 00:28:47.917000 audit: BPF prog-id=78 op=UNLOAD Jan 15 00:28:47.917000 audit: BPF prog-id=119 op=LOAD Jan 15 00:28:47.917000 audit: BPF prog-id=120 op=LOAD Jan 15 00:28:47.917000 audit: BPF prog-id=79 op=UNLOAD Jan 15 00:28:47.917000 audit: BPF prog-id=80 op=UNLOAD Jan 15 00:28:47.919000 audit: BPF prog-id=121 op=LOAD Jan 15 00:28:47.929000 audit: BPF prog-id=62 op=UNLOAD Jan 15 00:28:47.929000 audit: BPF prog-id=122 op=LOAD Jan 15 00:28:47.929000 audit: BPF prog-id=123 op=LOAD Jan 15 00:28:47.929000 audit: BPF prog-id=63 op=UNLOAD Jan 15 00:28:47.929000 audit: BPF prog-id=64 op=UNLOAD Jan 15 00:28:47.931000 audit: BPF prog-id=124 op=LOAD Jan 15 00:28:47.931000 audit: BPF prog-id=74 op=UNLOAD Jan 15 00:28:47.934000 audit: BPF prog-id=125 op=LOAD Jan 15 00:28:47.934000 audit: BPF prog-id=71 op=UNLOAD Jan 15 00:28:47.934000 audit: BPF prog-id=126 op=LOAD Jan 15 00:28:47.934000 audit: BPF prog-id=127 op=LOAD Jan 15 00:28:47.934000 audit: BPF prog-id=72 op=UNLOAD Jan 15 00:28:47.934000 audit: BPF prog-id=73 op=UNLOAD Jan 15 00:28:47.935000 audit: BPF prog-id=128 op=LOAD Jan 15 00:28:47.935000 audit: BPF prog-id=129 op=LOAD Jan 15 00:28:47.935000 audit: BPF prog-id=69 op=UNLOAD Jan 15 00:28:47.935000 audit: BPF prog-id=70 op=UNLOAD Jan 15 00:28:47.937000 audit: BPF prog-id=130 op=LOAD Jan 15 00:28:47.937000 audit: BPF prog-id=61 op=UNLOAD Jan 15 00:28:48.186428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:28:48.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:28:48.208228 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:28:48.261062 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:28:48.261062 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:28:48.261062 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:28:48.261062 kubelet[2801]: I0115 00:28:48.261052 2801 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:28:48.270446 kubelet[2801]: I0115 00:28:48.270388 2801 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:28:48.270446 kubelet[2801]: I0115 00:28:48.270439 2801 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:28:48.270698 kubelet[2801]: I0115 00:28:48.270640 2801 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:28:48.271915 kubelet[2801]: I0115 00:28:48.271864 2801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 00:28:48.274089 kubelet[2801]: I0115 00:28:48.274034 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:28:48.278929 kubelet[2801]: I0115 00:28:48.278909 2801 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:28:48.288279 kubelet[2801]: I0115 00:28:48.288203 2801 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 00:28:48.288992 kubelet[2801]: I0115 00:28:48.288918 2801 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:28:48.289197 kubelet[2801]: I0115 00:28:48.288994 2801 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:28:48.289314 kubelet[2801]: I0115 00:28:48.289202 2801 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:28:48.289314 kubelet[2801]: I0115 00:28:48.289213 2801 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:28:48.289314 kubelet[2801]: I0115 00:28:48.289261 2801 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:28:48.289532 kubelet[2801]: I0115 00:28:48.289450 2801 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:28:48.289532 kubelet[2801]: I0115 00:28:48.289525 2801 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:28:48.289584 kubelet[2801]: I0115 00:28:48.289547 2801 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:28:48.289584 kubelet[2801]: I0115 00:28:48.289557 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:28:48.292061 kubelet[2801]: I0115 00:28:48.291689 2801 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:28:48.292631 kubelet[2801]: I0115 00:28:48.292541 2801 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:28:48.293163 kubelet[2801]: I0115 00:28:48.293090 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:28:48.293163 kubelet[2801]: I0115 00:28:48.293153 2801 server.go:1287] "Started kubelet" Jan 15 00:28:48.295344 kubelet[2801]: I0115 00:28:48.295301 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:28:48.295841 kubelet[2801]: I0115 
00:28:48.295684 2801 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:28:48.295841 kubelet[2801]: I0115 00:28:48.295832 2801 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:28:48.297214 kubelet[2801]: I0115 00:28:48.297124 2801 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:28:48.298302 kubelet[2801]: I0115 00:28:48.298254 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:28:48.300107 kubelet[2801]: I0115 00:28:48.299961 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:28:48.313165 kubelet[2801]: E0115 00:28:48.313074 2801 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:28:48.314440 kubelet[2801]: I0115 00:28:48.314246 2801 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:28:48.314440 kubelet[2801]: I0115 00:28:48.314422 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:28:48.317555 kubelet[2801]: I0115 00:28:48.314694 2801 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:28:48.317555 kubelet[2801]: I0115 00:28:48.315564 2801 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:28:48.317555 kubelet[2801]: I0115 00:28:48.315737 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:28:48.319711 kubelet[2801]: I0115 00:28:48.319383 2801 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:28:48.331714 kubelet[2801]: I0115 00:28:48.331559 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:28:48.334351 kubelet[2801]: I0115 00:28:48.333969 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:28:48.334351 kubelet[2801]: I0115 00:28:48.333996 2801 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:28:48.334351 kubelet[2801]: I0115 00:28:48.334014 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
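[editor's note] The "Creating Container Manager object based on Node Config" entry above embeds the kubelet's nodeConfig as a JSON blob, including the hard-eviction thresholds. A minimal sketch, assuming only standard-library JSON parsing, that summarizes those thresholds from an excerpt of that blob; summarize_thresholds is an illustrative name, not kubelet code.

```python
import json


def summarize_thresholds(node_config_json: str) -> list[str]:
    """Summarize HardEvictionThresholds from a kubelet nodeConfig JSON blob."""
    cfg = json.loads(node_config_json)
    summary = []
    for t in cfg.get("HardEvictionThresholds", []):
        value = t["Value"]
        # A threshold is expressed either as an absolute Quantity or a Percentage.
        limit = value["Quantity"] if value.get("Quantity") else f'{value["Percentage"]:.0%}'
        summary.append(f'{t["Signal"]} {t["Operator"]} {limit}')
    return summary


# Excerpt of the nodeConfig logged above (two of the five thresholds shown):
sample = '''{"HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}'''
print(summarize_thresholds(sample))
# ['memory.available LessThan 100Mi', 'nodefs.available LessThan 10%']
```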
Jan 15 00:28:48.334351 kubelet[2801]: I0115 00:28:48.334022 2801 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:28:48.334351 kubelet[2801]: E0115 00:28:48.334067 2801 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:28:48.367704 kubelet[2801]: I0115 00:28:48.367633 2801 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:28:48.367704 kubelet[2801]: I0115 00:28:48.367673 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:28:48.367704 kubelet[2801]: I0115 00:28:48.367693 2801 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:28:48.367947 kubelet[2801]: I0115 00:28:48.367893 2801 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 00:28:48.367947 kubelet[2801]: I0115 00:28:48.367905 2801 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 00:28:48.367947 kubelet[2801]: I0115 00:28:48.367922 2801 policy_none.go:49] "None policy: Start" Jan 15 00:28:48.367947 kubelet[2801]: I0115 00:28:48.367931 2801 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:28:48.367947 kubelet[2801]: I0115 00:28:48.367942 2801 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:28:48.368103 kubelet[2801]: I0115 00:28:48.368061 2801 state_mem.go:75] "Updated machine memory state" Jan 15 00:28:48.375119 kubelet[2801]: I0115 00:28:48.374948 2801 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:28:48.375119 kubelet[2801]: I0115 00:28:48.375099 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:28:48.375119 kubelet[2801]: I0115 00:28:48.375109 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:28:48.375616 kubelet[2801]: I0115 00:28:48.375560 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:28:48.376500 kubelet[2801]: E0115 00:28:48.376413 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 00:28:48.435762 kubelet[2801]: I0115 00:28:48.435635 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.435999 kubelet[2801]: I0115 00:28:48.435635 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:48.436162 kubelet[2801]: I0115 00:28:48.436068 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:48.445504 kubelet[2801]: E0115 00:28:48.445302 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:48.445504 kubelet[2801]: E0115 00:28:48.445320 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:48.492075 kubelet[2801]: I0115 00:28:48.491993 2801 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:28:48.502853 kubelet[2801]: I0115 00:28:48.502298 2801 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 00:28:48.502853 kubelet[2801]: I0115 00:28:48.502373 2801 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:28:48.515406 kubelet[2801]: I0115 00:28:48.515323 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:48.515554 kubelet[2801]: I0115 00:28:48.515421 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.515554 kubelet[2801]: I0115 00:28:48.515453 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.515554 kubelet[2801]: I0115 00:28:48.515545 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.515618 kubelet[2801]: I0115 00:28:48.515573 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.515618 kubelet[2801]: I0115 00:28:48.515596 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:28:48.515680 kubelet[2801]: I0115 00:28:48.515617 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:48.515680 kubelet[2801]: I0115 00:28:48.515639 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56b7dfd130d942d5981fa39cf5d35ed7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56b7dfd130d942d5981fa39cf5d35ed7\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:48.515680 kubelet[2801]: I0115 00:28:48.515662 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:48.744277 kubelet[2801]: E0115 00:28:48.743983 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:48.746851 kubelet[2801]: E0115 00:28:48.746560 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:48.746916 kubelet[2801]: E0115 00:28:48.746897 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:49.290185 kubelet[2801]: I0115 00:28:49.289952 2801 apiserver.go:52] "Watching apiserver" Jan 15 00:28:49.315890 kubelet[2801]: I0115 00:28:49.315671 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:28:49.353533 kubelet[2801]: I0115 00:28:49.353276 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:49.353533 kubelet[2801]: I0115 00:28:49.353512 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:49.353697 kubelet[2801]: E0115 00:28:49.353611 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:49.361747 kubelet[2801]: E0115 00:28:49.361711 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:28:49.362110 kubelet[2801]: E0115 00:28:49.361985 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:49.364050 kubelet[2801]: E0115 00:28:49.363928 2801 kubelet.go:3196] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 00:28:49.364759 kubelet[2801]: E0115 00:28:49.364720 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:49.392526 kubelet[2801]: I0115 00:28:49.392227 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.392211389 podStartE2EDuration="1.392211389s" podCreationTimestamp="2026-01-15 00:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:28:49.382225527 +0000 UTC m=+1.167504629" watchObservedRunningTime="2026-01-15 00:28:49.392211389 +0000 UTC m=+1.177490490" Jan 15 00:28:49.401960 kubelet[2801]: I0115 00:28:49.401910 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.401892042 podStartE2EDuration="3.401892042s" podCreationTimestamp="2026-01-15 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:28:49.39242293 +0000 UTC m=+1.177702033" watchObservedRunningTime="2026-01-15 00:28:49.401892042 +0000 UTC m=+1.187171154" Jan 15 00:28:49.410876 kubelet[2801]: I0115 00:28:49.410724 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.41070484 podStartE2EDuration="3.41070484s" podCreationTimestamp="2026-01-15 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:28:49.401625247 +0000 UTC m=+1.186904360" watchObservedRunningTime="2026-01-15 00:28:49.41070484 +0000 UTC m=+1.195983952" Jan 15 00:28:50.355984 kubelet[2801]: E0115 00:28:50.355903 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:50.355984 kubelet[2801]: E0115 00:28:50.355917 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:50.357955 kubelet[2801]: E0115 00:28:50.357762 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:51.358217 kubelet[2801]: E0115 00:28:51.358021 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:54.113603 kubelet[2801]: I0115 00:28:54.113472 2801 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 00:28:54.114277 containerd[1681]: time="2026-01-15T00:28:54.114190308Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 15 00:28:54.114667 kubelet[2801]: I0115 00:28:54.114614 2801 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 00:28:55.213906 systemd[1]: Created slice kubepods-besteffort-podab2c84be_f9b9_4651_8b46_748902b336f5.slice - libcontainer container kubepods-besteffort-podab2c84be_f9b9_4651_8b46_748902b336f5.slice. Jan 15 00:28:55.292398 kubelet[2801]: W0115 00:28:55.292213 2801 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 15 00:28:55.292961 kubelet[2801]: W0115 00:28:55.292327 2801 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 15 00:28:55.292961 kubelet[2801]: E0115 00:28:55.292848 2801 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 15 00:28:55.293161 kubelet[2801]: E0115 00:28:55.293089 2801 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 15 00:28:55.302026 systemd[1]: Created slice kubepods-besteffort-pod4f954241_94d8_4a3d_a1ef_ae05bf87cefe.slice - libcontainer container kubepods-besteffort-pod4f954241_94d8_4a3d_a1ef_ae05bf87cefe.slice. 
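The kubepods-besteffort-pod*.slice units created above follow a visible pattern: a QoS-class prefix plus the pod UID with dashes mapped to underscores, as used by the systemd cgroup driver. A short sketch that reproduces the slice name seen in the log; the helper name is made up for illustration, and the UID is the kube-proxy pod UID reported in the volume entries further down.

```go
// Illustrative only: derive the systemd slice name shown in the log for a
// BestEffort pod. The pattern (prefix + pod UID with '-' replaced by '_',
// suffixed with ".slice") is taken from the entries above.
package main

import (
	"fmt"
	"strings"
)

func besteffortPodSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of kube-proxy-h72wf as reported later in the log.
	fmt.Println(besteffortPodSlice("ab2c84be-f9b9-4651-8b46-748902b336f5"))
	// kubepods-besteffort-podab2c84be_f9b9_4651_8b46_748902b336f5.slice
}
```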
Jan 15 00:28:55.346293 kubelet[2801]: E0115 00:28:55.346134 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:55.368214 kubelet[2801]: E0115 00:28:55.368181 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:55.386012 kubelet[2801]: I0115 00:28:55.385711 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab2c84be-f9b9-4651-8b46-748902b336f5-kube-proxy\") pod \"kube-proxy-h72wf\" (UID: \"ab2c84be-f9b9-4651-8b46-748902b336f5\") " pod="kube-system/kube-proxy-h72wf" Jan 15 00:28:55.386012 kubelet[2801]: I0115 00:28:55.385867 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab2c84be-f9b9-4651-8b46-748902b336f5-xtables-lock\") pod \"kube-proxy-h72wf\" (UID: \"ab2c84be-f9b9-4651-8b46-748902b336f5\") " pod="kube-system/kube-proxy-h72wf" Jan 15 00:28:55.386012 kubelet[2801]: I0115 00:28:55.385906 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab2c84be-f9b9-4651-8b46-748902b336f5-lib-modules\") pod \"kube-proxy-h72wf\" (UID: \"ab2c84be-f9b9-4651-8b46-748902b336f5\") " pod="kube-system/kube-proxy-h72wf" Jan 15 00:28:55.386012 kubelet[2801]: I0115 00:28:55.385931 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psj5\" (UniqueName: \"kubernetes.io/projected/ab2c84be-f9b9-4651-8b46-748902b336f5-kube-api-access-2psj5\") pod \"kube-proxy-h72wf\" (UID: \"ab2c84be-f9b9-4651-8b46-748902b336f5\") " pod="kube-system/kube-proxy-h72wf" Jan 15 00:28:55.487291 kubelet[2801]: I0115 00:28:55.486933 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8fh\" (UniqueName: \"kubernetes.io/projected/4f954241-94d8-4a3d-a1ef-ae05bf87cefe-kube-api-access-tm8fh\") pod \"tigera-operator-7dcd859c48-fbzc6\" (UID: \"4f954241-94d8-4a3d-a1ef-ae05bf87cefe\") " pod="tigera-operator/tigera-operator-7dcd859c48-fbzc6" Jan 15 00:28:55.487291 kubelet[2801]: I0115 00:28:55.487052 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f954241-94d8-4a3d-a1ef-ae05bf87cefe-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fbzc6\" (UID: \"4f954241-94d8-4a3d-a1ef-ae05bf87cefe\") " pod="tigera-operator/tigera-operator-7dcd859c48-fbzc6" Jan 15 00:28:55.526825 kubelet[2801]: E0115 00:28:55.526715 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:55.528160 containerd[1681]: time="2026-01-15T00:28:55.527926947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h72wf,Uid:ab2c84be-f9b9-4651-8b46-748902b336f5,Namespace:kube-system,Attempt:0,}" Jan 15 00:28:55.599520 containerd[1681]: time="2026-01-15T00:28:55.599383110Z" level=info msg="connecting to shim 80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347" 
address="unix:///run/containerd/s/6bd9a034f8270943fba47882f7454fba8d75333d508734607e6eafdf804b2b49" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:28:55.693253 systemd[1]: Started cri-containerd-80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347.scope - libcontainer container 80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347. Jan 15 00:28:55.727000 audit: BPF prog-id=131 op=LOAD Jan 15 00:28:55.730923 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 00:28:55.731017 kernel: audit: type=1334 audit(1768436935.727:421): prog-id=131 op=LOAD Jan 15 00:28:55.735840 kernel: audit: type=1334 audit(1768436935.728:422): prog-id=132 op=LOAD Jan 15 00:28:55.735916 kernel: audit: type=1300 audit(1768436935.728:422): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: BPF prog-id=132 op=LOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.755569 kernel: audit: type=1327 audit(1768436935.728:422): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: BPF prog-id=132 op=UNLOAD Jan 15 00:28:55.760868 kernel: audit: type=1334 audit(1768436935.728:423): prog-id=132 op=UNLOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.771368 kernel: audit: type=1300 audit(1768436935.728:423): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.771540 kernel: audit: type=1327 audit(1768436935.728:423): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.775871 
containerd[1681]: time="2026-01-15T00:28:55.775825048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h72wf,Uid:ab2c84be-f9b9-4651-8b46-748902b336f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347\"" Jan 15 00:28:55.777552 kubelet[2801]: E0115 00:28:55.777449 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:55.781331 containerd[1681]: time="2026-01-15T00:28:55.781286273Z" level=info msg="CreateContainer within sandbox \"80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 00:28:55.785989 kernel: audit: type=1334 audit(1768436935.728:424): prog-id=133 op=LOAD Jan 15 00:28:55.728000 audit: BPF prog-id=133 op=LOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.798568 kernel: audit: type=1300 audit(1768436935.728:424): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.808344 kernel: audit: type=1327 audit(1768436935.728:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: BPF prog-id=134 op=LOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: BPF prog-id=134 op=UNLOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: BPF prog-id=133 op=UNLOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.728000 audit: BPF prog-id=135 op=LOAD Jan 15 00:28:55.728000 audit[2872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432 Jan 15 00:28:55.809931 containerd[1681]: time="2026-01-15T00:28:55.809862700Z" level=info msg="Container a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:28:55.821599 containerd[1681]: time="2026-01-15T00:28:55.821394856Z" level=info msg="CreateContainer within sandbox \"80e0d976d286179953c0370672cd20f2e576efba828b7268b50ccc091116e347\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad\"" Jan 15 00:28:55.823685 containerd[1681]: time="2026-01-15T00:28:55.822691514Z" level=info msg="StartContainer for \"a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad\"" Jan 15 00:28:55.825635 containerd[1681]: time="2026-01-15T00:28:55.825582365Z" level=info msg="connecting to shim a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad" address="unix:///run/containerd/s/6bd9a034f8270943fba47882f7454fba8d75333d508734607e6eafdf804b2b49" protocol=ttrpc version=3 Jan 15 00:28:55.861142 systemd[1]: Started cri-containerd-a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad.scope - libcontainer container a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad. 
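The audit PROCTITLE fields in the records around this point are the process argv, hex-encoded with NUL separators. A small Go sketch that decodes one of them; the sample is the runc invocation recorded at 00:28:55.728 above (the kernel length-limits the field, so the trailing container ID is cut short), and the helper name is illustrative.

```go
// Illustrative decoder for the hex-encoded, NUL-separated PROCTITLE fields
// in the surrounding audit records.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes; join them with spaces.
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// PROCTITLE of the runc invocation audited above; the field is truncated
	// by the kernel, so the container ID is incomplete.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653064393736643238363137393935336330333730363732636432"
	out, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/80e0d976d286179953c0370672cd2
}
```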
Jan 15 00:28:55.960000 audit: BPF prog-id=136 op=LOAD Jan 15 00:28:55.960000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2861 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138396334666337303933343966303831313961333531623838663633 Jan 15 00:28:55.960000 audit: BPF prog-id=137 op=LOAD Jan 15 00:28:55.960000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2861 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138396334666337303933343966303831313961333531623838663633 Jan 15 00:28:55.960000 audit: BPF prog-id=137 op=UNLOAD Jan 15 00:28:55.960000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138396334666337303933343966303831313961333531623838663633 Jan 15 00:28:55.960000 audit: BPF prog-id=136 op=UNLOAD Jan 15 00:28:55.960000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138396334666337303933343966303831313961333531623838663633 Jan 15 00:28:55.960000 audit: BPF prog-id=138 op=LOAD Jan 15 00:28:55.960000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2861 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:55.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138396334666337303933343966303831313961333531623838663633 Jan 15 00:28:55.989462 containerd[1681]: time="2026-01-15T00:28:55.989292890Z" level=info msg="StartContainer for 
\"a89c4fc709349f08119a351b88f6309c3631a5825ee049c86d875b9f147d36ad\" returns successfully" Jan 15 00:28:56.200000 audit[2963]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.200000 audit[2963]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd8461b80 a2=0 a3=7fffd8461b6c items=0 ppid=2911 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:28:56.202000 audit[2962]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.202000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa480dc50 a2=0 a3=7fffa480dc3c items=0 ppid=2911 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:28:56.204000 audit[2964]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.204000 audit[2964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffab0ebfa0 a2=0 a3=7fffab0ebf8c items=0 ppid=2911 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:28:56.209000 audit[2967]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.209000 audit[2967]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8e6d9780 a2=0 a3=7ffe8e6d976c items=0 ppid=2911 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:28:56.212000 audit[2966]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.212000 audit[2966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1984c5a0 a2=0 a3=7fff1984c58c items=0 ppid=2911 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:28:56.215000 audit[2968]: NETFILTER_CFG table=filter:59 family=10 
entries=1 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.215000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2059a4b0 a2=0 a3=7ffd2059a49c items=0 ppid=2911 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:28:56.306000 audit[2969]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.306000 audit[2969]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc694b8ad0 a2=0 a3=7ffc694b8abc items=0 ppid=2911 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:28:56.313000 audit[2971]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.313000 audit[2971]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc2eb660a0 a2=0 a3=7ffc2eb6608c items=0 ppid=2911 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 15 00:28:56.322000 audit[2974]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.322000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc9673ff00 a2=0 a3=7ffc9673feec items=0 ppid=2911 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 15 00:28:56.324000 audit[2975]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.324000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10e20840 a2=0 a3=7ffd10e2082c items=0 ppid=2911 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.324000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:28:56.331000 audit[2977]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.331000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0f83df10 a2=0 a3=7ffc0f83defc items=0 ppid=2911 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:28:56.334000 audit[2978]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.334000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf7e27bf0 a2=0 a3=7ffdf7e27bdc items=0 ppid=2911 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:28:56.341000 audit[2980]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.341000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc92ed6700 a2=0 a3=7ffc92ed66ec items=0 ppid=2911 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:28:56.349000 audit[2983]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.349000 audit[2983]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc301501b0 a2=0 a3=7ffc3015019c items=0 ppid=2911 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 15 00:28:56.352000 audit[2984]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.352000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd07ef2f0 a2=0 a3=7ffcd07ef2dc items=0 
ppid=2911 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:28:56.357000 audit[2986]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.357000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe6c1848e0 a2=0 a3=7ffe6c1848cc items=0 ppid=2911 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:28:56.359000 audit[2987]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.359000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe26adcf60 a2=0 a3=7ffe26adcf4c items=0 ppid=2911 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:28:56.364000 audit[2989]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.364000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7f8ca6d0 a2=0 a3=7ffd7f8ca6bc items=0 ppid=2911 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:28:56.375000 audit[2992]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.375000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff38b693d0 a2=0 a3=7fff38b693bc items=0 ppid=2911 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:28:56.377533 kubelet[2801]: E0115 00:28:56.377472 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:56.378208 kubelet[2801]: E0115 00:28:56.377580 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:56.386000 audit[2995]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.386000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0f7d2000 a2=0 a3=7fff0f7d1fec items=0 ppid=2911 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.386000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:28:56.389000 audit[2996]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.389000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2571ff10 a2=0 a3=7ffe2571fefc items=0 ppid=2911 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:28:56.395000 audit[2998]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.395000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe31989860 a2=0 a3=7ffe3198984c items=0 ppid=2911 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:28:56.404000 audit[3001]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.404000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcceda0260 a2=0 a3=7ffcceda024c items=0 ppid=2911 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:28:56.407000 audit[3002]: NETFILTER_CFG table=nat:77 family=2 entries=1 
op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.407000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef4093570 a2=0 a3=7ffef409355c items=0 ppid=2911 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:28:56.413000 audit[3004]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:28:56.413000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffeb0e4b790 a2=0 a3=7ffeb0e4b77c items=0 ppid=2911 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:28:56.448000 audit[3010]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:28:56.448000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefe430b10 a2=0 a3=7ffefe430afc items=0 ppid=2911 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:28:56.463000 audit[3010]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:28:56.463000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffefe430b10 a2=0 a3=7ffefe430afc items=0 ppid=2911 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:28:56.467000 audit[3015]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.467000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffedd5e4650 a2=0 a3=7ffedd5e463c items=0 ppid=2911 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:28:56.474000 audit[3017]: NETFILTER_CFG table=filter:82 family=10 entries=2 
op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.474000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd86a22dd0 a2=0 a3=7ffd86a22dbc items=0 ppid=2911 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 15 00:28:56.483000 audit[3020]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.483000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1ed71ee0 a2=0 a3=7ffd1ed71ecc items=0 ppid=2911 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 15 00:28:56.487000 audit[3021]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.487000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff732c73b0 a2=0 a3=7fff732c739c items=0 ppid=2911 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:28:56.492000 audit[3023]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.492000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe0d93670 a2=0 a3=7fffe0d9365c items=0 ppid=2911 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:28:56.495000 audit[3024]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.495000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20460370 a2=0 a3=7ffd2046035c items=0 ppid=2911 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:28:56.500000 audit[3026]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.500000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff84ad9b40 a2=0 a3=7fff84ad9b2c items=0 ppid=2911 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 15 00:28:56.510000 audit[3029]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.510000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffebc513f70 a2=0 a3=7ffebc513f5c items=0 ppid=2911 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:28:56.513000 audit[3030]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.513000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff630be4a0 a2=0 a3=7fff630be48c items=0 ppid=2911 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:28:56.518000 audit[3032]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.518000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe960798c0 a2=0 a3=7ffe960798ac items=0 ppid=2911 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:28:56.520000 audit[3033]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.520000 audit[3033]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffffad05c00 a2=0 a3=7ffffad05bec items=0 ppid=2911 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:28:56.526000 audit[3035]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.526000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb9a51210 a2=0 a3=7ffeb9a511fc items=0 ppid=2911 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:28:56.533000 audit[3038]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.533000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffedc385ef0 a2=0 a3=7ffedc385edc items=0 ppid=2911 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:28:56.540000 audit[3041]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.540000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6babf730 a2=0 a3=7fff6babf71c items=0 ppid=2911 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 15 00:28:56.543000 audit[3042]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.543000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff25456c90 a2=0 a3=7fff25456c7c items=0 ppid=2911 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.543000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:28:56.548000 audit[3044]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.548000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff33f7f380 a2=0 a3=7fff33f7f36c items=0 ppid=2911 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:28:56.555000 audit[3047]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.555000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff53df9d0 a2=0 a3=7ffff53df9bc items=0 ppid=2911 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:28:56.558000 audit[3048]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.558000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6046ff10 a2=0 a3=7ffd6046fefc items=0 ppid=2911 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:28:56.564000 audit[3050]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.564000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc909db5f0 a2=0 a3=7ffc909db5dc items=0 ppid=2911 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:28:56.567000 audit[3051]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.567000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8472ca20 a2=0 a3=7ffe8472ca0c items=0 ppid=2911 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:28:56.572000 audit[3053]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.572000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe70480200 a2=0 a3=7ffe704801ec items=0 ppid=2911 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:28:56.579000 audit[3056]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:28:56.579000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed34b3410 a2=0 a3=7ffed34b33fc items=0 ppid=2911 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:28:56.586000 audit[3058]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:28:56.586000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff88cefde0 a2=0 a3=7fff88cefdcc items=0 ppid=2911 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.586000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:28:56.587000 audit[3058]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:28:56.587000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff88cefde0 a2=0 a3=7fff88cefdcc items=0 ppid=2911 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:56.587000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:28:56.601509 kubelet[2801]: E0115 00:28:56.601376 2801 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 15 00:28:56.601601 kubelet[2801]: E0115 00:28:56.601550 2801 projected.go:194] Error preparing data for projected volume kube-api-access-tm8fh for pod tigera-operator/tigera-operator-7dcd859c48-fbzc6: failed to sync configmap cache: timed out waiting for the condition Jan 15 
00:28:56.601844 kubelet[2801]: E0115 00:28:56.601732 2801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f954241-94d8-4a3d-a1ef-ae05bf87cefe-kube-api-access-tm8fh podName:4f954241-94d8-4a3d-a1ef-ae05bf87cefe nodeName:}" failed. No retries permitted until 2026-01-15 00:28:57.101680854 +0000 UTC m=+8.886959966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tm8fh" (UniqueName: "kubernetes.io/projected/4f954241-94d8-4a3d-a1ef-ae05bf87cefe-kube-api-access-tm8fh") pod "tigera-operator-7dcd859c48-fbzc6" (UID: "4f954241-94d8-4a3d-a1ef-ae05bf87cefe") : failed to sync configmap cache: timed out waiting for the condition Jan 15 00:28:56.677169 kubelet[2801]: E0115 00:28:56.676908 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:56.691271 kubelet[2801]: I0115 00:28:56.691116 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h72wf" podStartSLOduration=1.691093307 podStartE2EDuration="1.691093307s" podCreationTimestamp="2026-01-15 00:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:28:56.39338839 +0000 UTC m=+8.178667502" watchObservedRunningTime="2026-01-15 00:28:56.691093307 +0000 UTC m=+8.476372439" Jan 15 00:28:57.379520 kubelet[2801]: E0115 00:28:57.379484 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:57.408447 containerd[1681]: time="2026-01-15T00:28:57.408337405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fbzc6,Uid:4f954241-94d8-4a3d-a1ef-ae05bf87cefe,Namespace:tigera-operator,Attempt:0,}" Jan 15 00:28:57.435172 containerd[1681]: time="2026-01-15T00:28:57.435094018Z" level=info msg="connecting to shim 261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a" address="unix:///run/containerd/s/686102773f7264c854a8fa0ecf08ad48634aa8bb9e3af313630dbed78139401b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:28:57.476077 systemd[1]: Started cri-containerd-261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a.scope - libcontainer container 261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a. 
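The ip6tables audit records above log the invoked command line only as a hex-encoded, NUL-separated PROCTITLE field. A minimal sketch (Python, not part of the captured log) of recovering the original argv; the sample value is copied verbatim from the first NETFILTER_CFG record above and decodes to the KUBE-SERVICES chain creation in the filter table:

    # Decode a Linux audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Value copied from the PROCTITLE record above.
    sample = ("6970367461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D5345525649434553002D740066696C746572")
    print(decode_proctitle(sample))
    # -> ['ip6tables', '-w', '5', '-W', '100000', '-N', 'KUBE-SERVICES', '-t', 'filter']

The same decoding applies to every PROCTITLE field in this log, including the iptables-restore and runc invocations further down.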
Jan 15 00:28:57.492000 audit: BPF prog-id=139 op=LOAD Jan 15 00:28:57.493000 audit: BPF prog-id=140 op=LOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=140 op=UNLOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=141 op=LOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=142 op=LOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=142 op=UNLOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=141 op=UNLOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.493000 audit: BPF prog-id=143 op=LOAD Jan 15 00:28:57.493000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3069 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:28:57.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236313734383831363233336535373331336565353838626266313462 Jan 15 00:28:57.545935 containerd[1681]: time="2026-01-15T00:28:57.545895812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fbzc6,Uid:4f954241-94d8-4a3d-a1ef-ae05bf87cefe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a\"" Jan 15 00:28:57.548553 containerd[1681]: time="2026-01-15T00:28:57.548491302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 00:28:58.782117 kubelet[2801]: E0115 00:28:58.781985 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:59.384099 kubelet[2801]: E0115 00:28:59.384039 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:28:59.488092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296737261.mount: Deactivated successfully. Jan 15 00:29:00.900045 update_engine[1649]: I20260115 00:29:00.899922 1649 update_attempter.cc:509] Updating boot flags... 
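The recurring kubelet dns.go warnings ("Nameserver limits exceeded") indicate the node's resolver configuration lists more nameservers than the kubelet forwards: only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied here and the rest are dropped. A rough check along those lines (Python; the /etc/resolv.conf path and the limit of three are assumptions, since the kubelet may be pointed at a different file via --resolv-conf):

    # Count nameserver entries in the resolver config the kubelet reads.
    RESOLV_CONF = "/etc/resolv.conf"   # assumption; may differ under --resolv-conf
    APPLIED_LIMIT = 3                  # matches the three servers applied in the log

    def nameservers(path: str = RESOLV_CONF) -> list[str]:
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        return servers

    ns = nameservers()
    if len(ns) > APPLIED_LIMIT:
        print(f"{len(ns)} nameservers configured; only {ns[:APPLIED_LIMIT]} will be used")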
Jan 15 00:29:01.562222 containerd[1681]: time="2026-01-15T00:29:01.562015235Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:01.563442 containerd[1681]: time="2026-01-15T00:29:01.563357554Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 15 00:29:01.564920 containerd[1681]: time="2026-01-15T00:29:01.564868011Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:01.567208 containerd[1681]: time="2026-01-15T00:29:01.567128928Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:01.568096 containerd[1681]: time="2026-01-15T00:29:01.567992885Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.019434508s" Jan 15 00:29:01.568096 containerd[1681]: time="2026-01-15T00:29:01.568045923Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 00:29:01.571134 containerd[1681]: time="2026-01-15T00:29:01.571057759Z" level=info msg="CreateContainer within sandbox \"261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 00:29:01.583315 containerd[1681]: time="2026-01-15T00:29:01.583220405Z" level=info msg="Container b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:01.593820 containerd[1681]: time="2026-01-15T00:29:01.593453082Z" level=info msg="CreateContainer within sandbox \"261748816233e57313ee588bbf14b70a81ff2bbf216f64c53b54a99c5b003a3a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c\"" Jan 15 00:29:01.597051 containerd[1681]: time="2026-01-15T00:29:01.596991962Z" level=info msg="StartContainer for \"b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c\"" Jan 15 00:29:01.598030 containerd[1681]: time="2026-01-15T00:29:01.597948272Z" level=info msg="connecting to shim b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c" address="unix:///run/containerd/s/686102773f7264c854a8fa0ecf08ad48634aa8bb9e3af313630dbed78139401b" protocol=ttrpc version=3 Jan 15 00:29:01.634042 systemd[1]: Started cri-containerd-b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c.scope - libcontainer container b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c. 
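The pull above reports 23558205 bytes read for quay.io/tigera/operator:v1.38.7 over 4.019434508s. A back-of-the-envelope throughput estimate from those logged values (Python; treating "bytes read" as the bytes actually transferred is an approximation):

    # Rough pull throughput from the containerd log values above.
    bytes_read = 23_558_205        # "stop pulling image ... bytes read=23558205"
    pull_seconds = 4.019434508     # "Pulled image ... in 4.019434508s"

    rate = bytes_read / pull_seconds
    print(f"~{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")   # ~5.86 MB/s (5.59 MiB/s)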
Jan 15 00:29:01.654908 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 15 00:29:01.655038 kernel: audit: type=1334 audit(1768436941.651:493): prog-id=144 op=LOAD Jan 15 00:29:01.651000 audit: BPF prog-id=144 op=LOAD Jan 15 00:29:01.651000 audit: BPF prog-id=145 op=LOAD Jan 15 00:29:01.659095 kernel: audit: type=1334 audit(1768436941.651:494): prog-id=145 op=LOAD Jan 15 00:29:01.659148 kernel: audit: type=1300 audit(1768436941.651:494): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.651000 audit[3132]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.678883 kernel: audit: type=1327 audit(1768436941.651:494): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.678963 kernel: audit: type=1334 audit(1768436941.651:495): prog-id=145 op=UNLOAD Jan 15 00:29:01.651000 audit: BPF prog-id=145 op=UNLOAD Jan 15 00:29:01.651000 audit[3132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.690809 kernel: audit: type=1300 audit(1768436941.651:495): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.701284 kernel: audit: type=1327 audit(1768436941.651:495): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.701425 kernel: audit: type=1334 audit(1768436941.652:496): prog-id=146 op=LOAD Jan 15 00:29:01.652000 audit: BPF prog-id=146 op=LOAD Jan 15 00:29:01.703908 kernel: audit: type=1300 audit(1768436941.652:496): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit[3132]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.724900 kernel: audit: type=1327 audit(1768436941.652:496): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.652000 audit: BPF prog-id=147 op=LOAD Jan 15 00:29:01.652000 audit[3132]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.652000 audit: BPF prog-id=147 op=UNLOAD Jan 15 00:29:01.652000 audit[3132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.652000 audit: BPF prog-id=146 op=UNLOAD Jan 15 00:29:01.652000 audit[3132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.652000 audit: BPF prog-id=148 op=LOAD Jan 15 00:29:01.652000 audit[3132]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3069 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:01.652000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663836656435666561616230336466323033306639646431396334 Jan 15 00:29:01.729343 containerd[1681]: time="2026-01-15T00:29:01.728936098Z" level=info msg="StartContainer for \"b3f86ed5feaab03df2030f9dd19c43d89d616508f1659232f5d0425d106d9d5c\" returns successfully" Jan 15 00:29:07.602091 sudo[1865]: pam_unix(sudo:session): session closed for user root Jan 15 00:29:07.601000 audit[1865]: USER_END pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:29:07.604462 sshd[1864]: Connection closed by 10.0.0.1 port 35040 Jan 15 00:29:07.605474 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 15 00:29:07.605555 kernel: audit: type=1106 audit(1768436947.601:501): pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:29:07.612656 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:07.601000 audit[1865]: CRED_DISP pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:29:07.623500 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:35040.service: Deactivated successfully. Jan 15 00:29:07.628741 kernel: audit: type=1104 audit(1768436947.601:502): pid=1865 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:29:07.615000 audit[1861]: USER_END pid=1861 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:07.629179 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 00:29:07.629919 systemd[1]: session-7.scope: Consumed 4.943s CPU time, 212.9M memory peak. Jan 15 00:29:07.633377 systemd-logind[1646]: Session 7 logged out. Waiting for processes to exit. Jan 15 00:29:07.637377 systemd-logind[1646]: Removed session 7. 
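The audit records interleaved with the session teardown above carry their own epoch timestamps, e.g. audit(1768436947.601:501) for the sudo session close. Converting that epoch value back to UTC (a small Python sketch) shows it lines up with the surrounding journal timestamps:

    from datetime import datetime, timezone

    # Epoch seconds taken from the record "audit(1768436947.601:501)" above.
    audit_epoch = 1768436947.601
    print(datetime.fromtimestamp(audit_epoch, tz=timezone.utc))
    # -> 2026-01-15 00:29:07.601000+00:00, i.e. Jan 15 00:29:07.601 in this journal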
Jan 15 00:29:07.645497 kernel: audit: type=1106 audit(1768436947.615:503): pid=1861 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:07.645714 kernel: audit: type=1104 audit(1768436947.615:504): pid=1861 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:07.615000 audit[1861]: CRED_DISP pid=1861 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:07.659288 kernel: audit: type=1131 audit(1768436947.625:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.47:22-10.0.0.1:35040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:07.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.47:22-10.0.0.1:35040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:08.087000 audit[3224]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.094888 kernel: audit: type=1325 audit(1768436948.087:506): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.087000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc995ad820 a2=0 a3=7ffc995ad80c items=0 ppid=2911 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:08.120342 kernel: audit: type=1300 audit(1768436948.087:506): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc995ad820 a2=0 a3=7ffc995ad80c items=0 ppid=2911 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.120451 kernel: audit: type=1327 audit(1768436948.087:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:08.120000 audit[3224]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.132826 kernel: audit: type=1325 audit(1768436948.120:507): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.120000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc995ad820 a2=0 a3=0 items=0 ppid=2911 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:08.143000 audit[3226]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.144833 kernel: audit: type=1300 audit(1768436948.120:507): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc995ad820 a2=0 a3=0 items=0 ppid=2911 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.143000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe14ef5d60 a2=0 a3=7ffe14ef5d4c items=0 ppid=2911 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:08.155000 audit[3226]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:08.155000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe14ef5d60 a2=0 a3=0 items=0 ppid=2911 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:08.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:10.524000 audit[3228]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:10.524000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd681bb120 a2=0 a3=7ffd681bb10c items=0 ppid=2911 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:10.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:10.530000 audit[3228]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:10.530000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd681bb120 a2=0 a3=0 items=0 ppid=2911 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:10.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:10.553000 audit[3230]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3230 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:10.553000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7a9a48f0 a2=0 a3=7ffc7a9a48dc items=0 ppid=2911 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:10.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:10.562000 audit[3230]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:10.562000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7a9a48f0 a2=0 a3=0 items=0 ppid=2911 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:10.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:11.578000 audit[3232]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:11.578000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffffeb15f70 a2=0 a3=7ffffeb15f5c items=0 ppid=2911 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:11.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:11.583000 audit[3232]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:11.583000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffffeb15f70 a2=0 a3=0 items=0 ppid=2911 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:11.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:12.333932 kubelet[2801]: I0115 00:29:12.333709 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fbzc6" podStartSLOduration=13.312487862 podStartE2EDuration="17.333676088s" podCreationTimestamp="2026-01-15 00:28:55 +0000 UTC" firstStartedPulling="2026-01-15 00:28:57.547887465 +0000 UTC m=+9.333166568" lastFinishedPulling="2026-01-15 00:29:01.569075691 +0000 UTC m=+13.354354794" observedRunningTime="2026-01-15 00:29:02.407710947 +0000 UTC m=+14.192990059" watchObservedRunningTime="2026-01-15 00:29:12.333676088 +0000 UTC m=+24.118955210" Jan 15 00:29:12.337351 kubelet[2801]: I0115 00:29:12.337124 2801 status_manager.go:890] "Failed to get status for pod" podUID="0d851a76-883a-498b-b902-262ca3845979" pod="calico-system/calico-typha-59d74fbc6b-9gnpt" err="pods \"calico-typha-59d74fbc6b-9gnpt\" is forbidden: User \"system:node:localhost\" 
cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Jan 15 00:29:12.350984 systemd[1]: Created slice kubepods-besteffort-pod0d851a76_883a_498b_b902_262ca3845979.slice - libcontainer container kubepods-besteffort-pod0d851a76_883a_498b_b902_262ca3845979.slice. Jan 15 00:29:12.404338 kubelet[2801]: I0115 00:29:12.404198 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jrd\" (UniqueName: \"kubernetes.io/projected/0d851a76-883a-498b-b902-262ca3845979-kube-api-access-57jrd\") pod \"calico-typha-59d74fbc6b-9gnpt\" (UID: \"0d851a76-883a-498b-b902-262ca3845979\") " pod="calico-system/calico-typha-59d74fbc6b-9gnpt" Jan 15 00:29:12.404338 kubelet[2801]: I0115 00:29:12.404291 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d851a76-883a-498b-b902-262ca3845979-tigera-ca-bundle\") pod \"calico-typha-59d74fbc6b-9gnpt\" (UID: \"0d851a76-883a-498b-b902-262ca3845979\") " pod="calico-system/calico-typha-59d74fbc6b-9gnpt" Jan 15 00:29:12.404338 kubelet[2801]: I0115 00:29:12.404329 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d851a76-883a-498b-b902-262ca3845979-typha-certs\") pod \"calico-typha-59d74fbc6b-9gnpt\" (UID: \"0d851a76-883a-498b-b902-262ca3845979\") " pod="calico-system/calico-typha-59d74fbc6b-9gnpt" Jan 15 00:29:12.554661 systemd[1]: Created slice kubepods-besteffort-podcaf1b669_c947_4a75_8149_0d4bf7b3f073.slice - libcontainer container kubepods-besteffort-podcaf1b669_c947_4a75_8149_0d4bf7b3f073.slice. 
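The pod_startup_latency_tracker entry above for tigera-operator-7dcd859c48-fbzc6 reports podStartE2EDuration=17.333676088s and podStartSLOduration=13.312487862; the difference is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling. Redoing the arithmetic from the values in that log line (Python):

    # Timestamps from the log line, expressed as seconds after 00:28:00 UTC.
    first_started_pulling = 57.547887465    # firstStartedPulling 00:28:57.547887465
    last_finished_pulling = 61.569075691    # lastFinishedPulling 00:29:01.569075691
    pod_start_e2e = 17.333676088            # podStartE2EDuration

    pull_window = last_finished_pulling - first_started_pulling   # ~4.021188226 s
    print(round(pod_start_e2e - pull_window, 9))                  # ~13.312487862, the reported podStartSLOduration

So the SLO-tracked startup time is the end-to-end time minus the image pull, consistent with the earlier "Pulled image ... in 4.019434508s" message (the kubelet's pull window is slightly wider than containerd's own timing).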
Jan 15 00:29:12.570000 audit[3238]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:12.570000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd81e02a70 a2=0 a3=7ffd81e02a5c items=0 ppid=2911 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:12.582000 audit[3238]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:12.582000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd81e02a70 a2=0 a3=0 items=0 ppid=2911 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:12.605934 kubelet[2801]: I0115 00:29:12.605679 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-cni-log-dir\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.605934 kubelet[2801]: I0115 00:29:12.605742 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-flexvol-driver-host\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.605934 kubelet[2801]: I0115 00:29:12.605762 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-var-lib-calico\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.605934 kubelet[2801]: I0115 00:29:12.605826 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-lib-modules\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.605934 kubelet[2801]: I0115 00:29:12.605842 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/caf1b669-c947-4a75-8149-0d4bf7b3f073-node-certs\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606167 kubelet[2801]: I0115 00:29:12.605855 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-xtables-lock\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606167 kubelet[2801]: I0115 00:29:12.605867 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-var-run-calico\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606167 kubelet[2801]: I0115 00:29:12.605882 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caf1b669-c947-4a75-8149-0d4bf7b3f073-tigera-ca-bundle\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606167 kubelet[2801]: I0115 00:29:12.605899 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrhw\" (UniqueName: \"kubernetes.io/projected/caf1b669-c947-4a75-8149-0d4bf7b3f073-kube-api-access-cqrhw\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606167 kubelet[2801]: I0115 00:29:12.605913 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-cni-bin-dir\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606281 kubelet[2801]: I0115 00:29:12.605925 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-cni-net-dir\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.606281 kubelet[2801]: I0115 00:29:12.605939 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/caf1b669-c947-4a75-8149-0d4bf7b3f073-policysync\") pod \"calico-node-v2kd2\" (UID: \"caf1b669-c947-4a75-8149-0d4bf7b3f073\") " pod="calico-system/calico-node-v2kd2" Jan 15 00:29:12.663348 kubelet[2801]: E0115 00:29:12.663255 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:12.664447 containerd[1681]: time="2026-01-15T00:29:12.664285629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d74fbc6b-9gnpt,Uid:0d851a76-883a-498b-b902-262ca3845979,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:12.695376 containerd[1681]: time="2026-01-15T00:29:12.695266845Z" level=info msg="connecting to shim 9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6" address="unix:///run/containerd/s/5b988abd92a353e484dc71ee96bccc86f46c68d28fc34bf0f84087d7d69688bf" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:12.730833 kubelet[2801]: E0115 00:29:12.727967 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.730833 kubelet[2801]: W0115 
00:29:12.727995 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.730833 kubelet[2801]: E0115 00:29:12.728052 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.735037 kubelet[2801]: E0115 00:29:12.734177 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.735037 kubelet[2801]: W0115 00:29:12.734209 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.735037 kubelet[2801]: E0115 00:29:12.734243 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.749887 kubelet[2801]: E0115 00:29:12.749037 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:12.772431 systemd[1]: Started cri-containerd-9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6.scope - libcontainer container 9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6. Jan 15 00:29:12.804896 kubelet[2801]: E0115 00:29:12.804826 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.804896 kubelet[2801]: W0115 00:29:12.804867 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.804896 kubelet[2801]: E0115 00:29:12.804891 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.805360 kubelet[2801]: E0115 00:29:12.805237 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.805360 kubelet[2801]: W0115 00:29:12.805271 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.805360 kubelet[2801]: E0115 00:29:12.805282 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.805754 kubelet[2801]: E0115 00:29:12.805680 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.805754 kubelet[2801]: W0115 00:29:12.805714 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.805754 kubelet[2801]: E0115 00:29:12.805724 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.806341 kubelet[2801]: E0115 00:29:12.806225 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.806341 kubelet[2801]: W0115 00:29:12.806322 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.806477 kubelet[2801]: E0115 00:29:12.806342 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.806921 kubelet[2801]: E0115 00:29:12.806860 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.806921 kubelet[2801]: W0115 00:29:12.806902 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.806921 kubelet[2801]: E0115 00:29:12.806914 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.807260 kubelet[2801]: E0115 00:29:12.807137 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.807260 kubelet[2801]: W0115 00:29:12.807184 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.807260 kubelet[2801]: E0115 00:29:12.807194 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.807466 kubelet[2801]: E0115 00:29:12.807451 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.807466 kubelet[2801]: W0115 00:29:12.807462 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.807514 kubelet[2801]: E0115 00:29:12.807471 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.808297 kubelet[2801]: E0115 00:29:12.808240 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.808297 kubelet[2801]: W0115 00:29:12.808274 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.808297 kubelet[2801]: E0115 00:29:12.808325 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.809552 kubelet[2801]: E0115 00:29:12.808838 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.809552 kubelet[2801]: W0115 00:29:12.808849 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.809552 kubelet[2801]: E0115 00:29:12.808860 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.810485 kubelet[2801]: E0115 00:29:12.810424 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.810485 kubelet[2801]: W0115 00:29:12.810438 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.810485 kubelet[2801]: E0115 00:29:12.810448 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.813435 kubelet[2801]: E0115 00:29:12.813349 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.813435 kubelet[2801]: W0115 00:29:12.813432 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.813503 kubelet[2801]: E0115 00:29:12.813445 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.812000 audit: BPF prog-id=149 op=LOAD Jan 15 00:29:12.814277 kubelet[2801]: E0115 00:29:12.814163 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.814277 kubelet[2801]: W0115 00:29:12.814205 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.814277 kubelet[2801]: E0115 00:29:12.814218 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.814656 kubelet[2801]: E0115 00:29:12.814586 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.814656 kubelet[2801]: W0115 00:29:12.814625 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.814656 kubelet[2801]: E0115 00:29:12.814635 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.815014 kubelet[2801]: E0115 00:29:12.814912 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.815014 kubelet[2801]: W0115 00:29:12.814947 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.815014 kubelet[2801]: E0115 00:29:12.814956 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.815285 kubelet[2801]: E0115 00:29:12.815149 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.815285 kubelet[2801]: W0115 00:29:12.815186 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.815285 kubelet[2801]: E0115 00:29:12.815196 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.815874 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 15 00:29:12.815927 kernel: audit: type=1334 audit(1768436952.812:518): prog-id=149 op=LOAD Jan 15 00:29:12.818021 kubelet[2801]: E0115 00:29:12.817876 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.818021 kubelet[2801]: W0115 00:29:12.817912 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.818021 kubelet[2801]: E0115 00:29:12.817927 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.818976 kubelet[2801]: E0115 00:29:12.818892 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.819037 kubelet[2801]: W0115 00:29:12.818997 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.819037 kubelet[2801]: E0115 00:29:12.819010 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822250 kernel: audit: type=1334 audit(1768436952.812:519): prog-id=150 op=LOAD Jan 15 00:29:12.812000 audit: BPF prog-id=150 op=LOAD Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.819360 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822486 kubelet[2801]: W0115 00:29:12.819375 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.819429 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.819964 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822486 kubelet[2801]: W0115 00:29:12.819974 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.819984 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.820201 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822486 kubelet[2801]: W0115 00:29:12.820213 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.820225 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822486 kubelet[2801]: E0115 00:29:12.820852 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822680 kubelet[2801]: W0115 00:29:12.820862 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822680 kubelet[2801]: E0115 00:29:12.820871 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.822680 kubelet[2801]: I0115 00:29:12.820926 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19-kubelet-dir\") pod \"csi-node-driver-z9lkl\" (UID: \"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19\") " pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:12.822680 kubelet[2801]: E0115 00:29:12.821472 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822680 kubelet[2801]: W0115 00:29:12.821483 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822680 kubelet[2801]: E0115 00:29:12.821496 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822680 kubelet[2801]: E0115 00:29:12.821889 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822680 kubelet[2801]: W0115 00:29:12.821898 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822680 kubelet[2801]: E0115 00:29:12.821944 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822986 kubelet[2801]: E0115 00:29:12.822273 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822986 kubelet[2801]: W0115 00:29:12.822288 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822986 kubelet[2801]: E0115 00:29:12.822301 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.822986 kubelet[2801]: I0115 00:29:12.822322 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19-varrun\") pod \"csi-node-driver-z9lkl\" (UID: \"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19\") " pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:12.822986 kubelet[2801]: E0115 00:29:12.822664 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.822986 kubelet[2801]: W0115 00:29:12.822674 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.822986 kubelet[2801]: E0115 00:29:12.822684 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.822986 kubelet[2801]: I0115 00:29:12.822699 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19-registration-dir\") pod \"csi-node-driver-z9lkl\" (UID: \"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19\") " pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:12.812000 audit[3259]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.829026 kubelet[2801]: E0115 00:29:12.823372 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829026 kubelet[2801]: W0115 00:29:12.823384 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829026 kubelet[2801]: E0115 00:29:12.823477 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829026 kubelet[2801]: I0115 00:29:12.824638 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19-socket-dir\") pod \"csi-node-driver-z9lkl\" (UID: \"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19\") " pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:12.829026 kubelet[2801]: E0115 00:29:12.824854 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829026 kubelet[2801]: W0115 00:29:12.824863 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829026 kubelet[2801]: E0115 00:29:12.824909 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829026 kubelet[2801]: E0115 00:29:12.825512 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829026 kubelet[2801]: W0115 00:29:12.825624 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.825833 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826052 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829212 kubelet[2801]: W0115 00:29:12.826062 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826188 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826345 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829212 kubelet[2801]: W0115 00:29:12.826354 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826528 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826879 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829212 kubelet[2801]: W0115 00:29:12.826896 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829212 kubelet[2801]: E0115 00:29:12.826980 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829494 kubelet[2801]: I0115 00:29:12.827002 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bmw\" (UniqueName: \"kubernetes.io/projected/44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19-kube-api-access-x8bmw\") pod \"csi-node-driver-z9lkl\" (UID: \"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19\") " pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:12.829494 kubelet[2801]: E0115 00:29:12.827319 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829494 kubelet[2801]: W0115 00:29:12.827330 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829494 kubelet[2801]: E0115 00:29:12.827339 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.829494 kubelet[2801]: E0115 00:29:12.827714 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829494 kubelet[2801]: W0115 00:29:12.827723 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829494 kubelet[2801]: E0115 00:29:12.827756 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829494 kubelet[2801]: E0115 00:29:12.828130 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829494 kubelet[2801]: W0115 00:29:12.828141 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829649 kubelet[2801]: E0115 00:29:12.828152 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.829649 kubelet[2801]: E0115 00:29:12.828496 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.829649 kubelet[2801]: W0115 00:29:12.828508 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.829649 kubelet[2801]: E0115 00:29:12.828518 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.846432 kernel: audit: type=1300 audit(1768436952.812:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.846498 kernel: audit: type=1327 audit(1768436952.812:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.812000 audit: BPF prog-id=150 op=UNLOAD Jan 15 00:29:12.849266 kernel: audit: type=1334 audit(1768436952.812:520): prog-id=150 op=UNLOAD Jan 15 00:29:12.849313 kernel: audit: type=1300 audit(1768436952.812:520): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.812000 audit[3259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.862724 kubelet[2801]: E0115 00:29:12.862601 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:12.868181 containerd[1681]: time="2026-01-15T00:29:12.867876961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2kd2,Uid:caf1b669-c947-4a75-8149-0d4bf7b3f073,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:12.875258 kernel: audit: type=1327 audit(1768436952.812:520): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.878909 kernel: audit: type=1334 audit(1768436952.815:521): prog-id=151 op=LOAD Jan 15 00:29:12.815000 audit: BPF prog-id=151 op=LOAD Jan 15 00:29:12.815000 audit[3259]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.892925 kernel: audit: type=1300 audit(1768436952.815:521): arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.893239 kernel: audit: type=1327 audit(1768436952.815:521): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.815000 audit: BPF prog-id=152 op=LOAD Jan 15 00:29:12.815000 audit[3259]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.815000 audit: BPF prog-id=152 op=UNLOAD Jan 15 00:29:12.815000 audit[3259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.815000 audit: BPF prog-id=151 op=UNLOAD Jan 15 00:29:12.815000 audit[3259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.815000 audit: BPF prog-id=153 op=LOAD Jan 15 00:29:12.815000 audit[3259]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3247 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:12.815000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643230386163633035303936373464366635636332386561343235 Jan 15 00:29:12.912123 containerd[1681]: time="2026-01-15T00:29:12.911751632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d74fbc6b-9gnpt,Uid:0d851a76-883a-498b-b902-262ca3845979,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6\"" Jan 15 00:29:12.913249 kubelet[2801]: E0115 00:29:12.913108 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:12.914070 containerd[1681]: time="2026-01-15T00:29:12.913997478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 00:29:12.927000 containerd[1681]: time="2026-01-15T00:29:12.926956315Z" level=info msg="connecting to shim 8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e" address="unix:///run/containerd/s/4122ec011e33fbc7bf9d5ec830e80a5fe361c2585b094da1688bc9d6a5ea6e14" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:12.929940 kubelet[2801]: E0115 00:29:12.929699 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.929940 kubelet[2801]: W0115 00:29:12.929755 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.929940 kubelet[2801]: E0115 00:29:12.929855 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.930528 kubelet[2801]: E0115 00:29:12.930320 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.930528 kubelet[2801]: W0115 00:29:12.930336 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.930528 kubelet[2801]: E0115 00:29:12.930352 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.931238 kubelet[2801]: E0115 00:29:12.930722 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.931238 kubelet[2801]: W0115 00:29:12.930734 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.931238 kubelet[2801]: E0115 00:29:12.930757 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.931238 kubelet[2801]: E0115 00:29:12.931170 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.931238 kubelet[2801]: W0115 00:29:12.931181 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.931238 kubelet[2801]: E0115 00:29:12.931197 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.931956 kubelet[2801]: E0115 00:29:12.931533 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.931956 kubelet[2801]: W0115 00:29:12.931543 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.931956 kubelet[2801]: E0115 00:29:12.931578 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.932059 kubelet[2801]: E0115 00:29:12.931978 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.932059 kubelet[2801]: W0115 00:29:12.931988 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.932366 kubelet[2801]: E0115 00:29:12.932244 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.932476 kubelet[2801]: E0115 00:29:12.932420 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.932476 kubelet[2801]: W0115 00:29:12.932433 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.932551 kubelet[2801]: E0115 00:29:12.932487 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.933204 kubelet[2801]: E0115 00:29:12.933003 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.933204 kubelet[2801]: W0115 00:29:12.933013 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.933877 kubelet[2801]: E0115 00:29:12.933331 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.933877 kubelet[2801]: E0115 00:29:12.933652 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.933877 kubelet[2801]: W0115 00:29:12.933661 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.933877 kubelet[2801]: E0115 00:29:12.933863 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.934255 kubelet[2801]: E0115 00:29:12.934136 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.934255 kubelet[2801]: W0115 00:29:12.934145 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.934255 kubelet[2801]: E0115 00:29:12.934196 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.934678 kubelet[2801]: E0115 00:29:12.934376 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.934678 kubelet[2801]: W0115 00:29:12.934384 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.934678 kubelet[2801]: E0115 00:29:12.934589 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.935227 kubelet[2801]: E0115 00:29:12.935012 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.935227 kubelet[2801]: W0115 00:29:12.935021 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.935227 kubelet[2801]: E0115 00:29:12.935143 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.935717 kubelet[2801]: E0115 00:29:12.935687 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.935717 kubelet[2801]: W0115 00:29:12.935702 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.936030 kubelet[2801]: E0115 00:29:12.936015 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.937110 kubelet[2801]: E0115 00:29:12.937003 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.937110 kubelet[2801]: W0115 00:29:12.937040 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.937186 kubelet[2801]: E0115 00:29:12.937170 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.937644 kubelet[2801]: E0115 00:29:12.937627 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.937730 kubelet[2801]: W0115 00:29:12.937695 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.937932 kubelet[2801]: E0115 00:29:12.937878 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.938172 kubelet[2801]: E0115 00:29:12.938142 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.938172 kubelet[2801]: W0115 00:29:12.938171 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.938523 kubelet[2801]: E0115 00:29:12.938490 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.938839 kubelet[2801]: E0115 00:29:12.938724 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.938839 kubelet[2801]: W0115 00:29:12.938833 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.939170 kubelet[2801]: E0115 00:29:12.939058 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.939170 kubelet[2801]: E0115 00:29:12.939144 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.939170 kubelet[2801]: W0115 00:29:12.939154 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.939479 kubelet[2801]: E0115 00:29:12.939207 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.939699 kubelet[2801]: E0115 00:29:12.939658 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.939699 kubelet[2801]: W0115 00:29:12.939678 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.940168 kubelet[2801]: E0115 00:29:12.939830 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.940168 kubelet[2801]: E0115 00:29:12.940069 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.940168 kubelet[2801]: W0115 00:29:12.940078 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.940510 kubelet[2801]: E0115 00:29:12.940200 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.940510 kubelet[2801]: E0115 00:29:12.940493 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.940510 kubelet[2801]: W0115 00:29:12.940503 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.940914 kubelet[2801]: E0115 00:29:12.940761 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.941070 kubelet[2801]: E0115 00:29:12.940959 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.941070 kubelet[2801]: W0115 00:29:12.940967 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.941070 kubelet[2801]: E0115 00:29:12.940980 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.941427 kubelet[2801]: E0115 00:29:12.941246 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.941427 kubelet[2801]: W0115 00:29:12.941264 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.941427 kubelet[2801]: E0115 00:29:12.941423 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:12.941748 kubelet[2801]: E0115 00:29:12.941734 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.941748 kubelet[2801]: W0115 00:29:12.941745 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.941870 kubelet[2801]: E0115 00:29:12.941759 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.942331 kubelet[2801]: E0115 00:29:12.942298 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.942331 kubelet[2801]: W0115 00:29:12.942332 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.942458 kubelet[2801]: E0115 00:29:12.942342 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.954436 kubelet[2801]: E0115 00:29:12.954345 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:12.954436 kubelet[2801]: W0115 00:29:12.954382 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:12.954436 kubelet[2801]: E0115 00:29:12.954436 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:12.978875 systemd[1]: Started cri-containerd-8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e.scope - libcontainer container 8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e. 
Jan 15 00:29:13.018000 audit: BPF prog-id=154 op=LOAD Jan 15 00:29:13.019000 audit: BPF prog-id=155 op=LOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=155 op=UNLOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=156 op=LOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=157 op=LOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=157 op=UNLOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=156 op=UNLOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.019000 audit: BPF prog-id=158 op=LOAD Jan 15 00:29:13.019000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3341 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:13.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353639373163383038626666333163323866626632666130646135 Jan 15 00:29:13.059094 containerd[1681]: time="2026-01-15T00:29:13.059014130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2kd2,Uid:caf1b669-c947-4a75-8149-0d4bf7b3f073,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\"" Jan 15 00:29:13.060532 kubelet[2801]: E0115 00:29:13.060495 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:14.283679 containerd[1681]: time="2026-01-15T00:29:14.283542171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:14.285265 containerd[1681]: time="2026-01-15T00:29:14.285056417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:14.287336 containerd[1681]: time="2026-01-15T00:29:14.287228797Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:14.291083 containerd[1681]: time="2026-01-15T00:29:14.291004633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:14.291980 containerd[1681]: time="2026-01-15T00:29:14.291903822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.37782362s" Jan 15 00:29:14.292122 containerd[1681]: time="2026-01-15T00:29:14.292057569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 00:29:14.293474 containerd[1681]: time="2026-01-15T00:29:14.293318978Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 00:29:14.316129 containerd[1681]: time="2026-01-15T00:29:14.314017715Z" level=info msg="CreateContainer within sandbox \"9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 00:29:14.329629 containerd[1681]: time="2026-01-15T00:29:14.329483435Z" level=info msg="Container 62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:14.335485 kubelet[2801]: E0115 00:29:14.335250 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:14.344464 containerd[1681]: time="2026-01-15T00:29:14.344237244Z" level=info msg="CreateContainer within sandbox \"9bd208acc0509674d6f5cc28ea425d5ffdb1c04b12878a1306c7c554da66f6c6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4\"" Jan 15 00:29:14.345168 containerd[1681]: time="2026-01-15T00:29:14.344973744Z" level=info msg="StartContainer for \"62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4\"" Jan 15 00:29:14.347068 containerd[1681]: time="2026-01-15T00:29:14.347014429Z" level=info msg="connecting to shim 62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4" address="unix:///run/containerd/s/5b988abd92a353e484dc71ee96bccc86f46c68d28fc34bf0f84087d7d69688bf" protocol=ttrpc version=3 Jan 15 00:29:14.397248 systemd[1]: Started cri-containerd-62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4.scope - libcontainer container 62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4. 
Jan 15 00:29:14.431000 audit: BPF prog-id=159 op=LOAD Jan 15 00:29:14.432000 audit: BPF prog-id=160 op=LOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=160 op=UNLOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=161 op=LOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=162 op=LOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=162 op=UNLOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=161 op=UNLOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.432000 audit: BPF prog-id=163 op=LOAD Jan 15 00:29:14.432000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3247 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:14.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632643433613234393138316165626262396266626263373064356236 Jan 15 00:29:14.503601 containerd[1681]: time="2026-01-15T00:29:14.503466905Z" level=info msg="StartContainer for \"62d43a249181aebbb9bfbbc70d5b6a13ae2638e27ae2f587e9fbc1c26b74a0a4\" returns successfully" Jan 15 00:29:15.459479 containerd[1681]: time="2026-01-15T00:29:15.459290590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:15.460735 containerd[1681]: time="2026-01-15T00:29:15.460657067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:15.462221 containerd[1681]: time="2026-01-15T00:29:15.462163511Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:15.464925 containerd[1681]: time="2026-01-15T00:29:15.464854554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:15.465466 containerd[1681]: time="2026-01-15T00:29:15.465334354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.171956977s" Jan 15 00:29:15.465466 containerd[1681]: time="2026-01-15T00:29:15.465382434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 00:29:15.467913 containerd[1681]: time="2026-01-15T00:29:15.467648039Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 00:29:15.482929 kubelet[2801]: E0115 00:29:15.482214 2801 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:15.484342 containerd[1681]: time="2026-01-15T00:29:15.484199133Z" level=info msg="Container 4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:15.504955 containerd[1681]: time="2026-01-15T00:29:15.504891764Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246\"" Jan 15 00:29:15.506896 containerd[1681]: time="2026-01-15T00:29:15.506661641Z" level=info msg="StartContainer for \"4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246\"" Jan 15 00:29:15.509745 containerd[1681]: time="2026-01-15T00:29:15.509696980Z" level=info msg="connecting to shim 4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246" address="unix:///run/containerd/s/4122ec011e33fbc7bf9d5ec830e80a5fe361c2585b094da1688bc9d6a5ea6e14" protocol=ttrpc version=3 Jan 15 00:29:15.540843 kubelet[2801]: E0115 00:29:15.540665 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.540843 kubelet[2801]: W0115 00:29:15.540722 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.540843 kubelet[2801]: E0115 00:29:15.540756 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.541275 kubelet[2801]: E0115 00:29:15.541248 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.541275 kubelet[2801]: W0115 00:29:15.541267 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.541500 kubelet[2801]: E0115 00:29:15.541285 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.541701 kubelet[2801]: E0115 00:29:15.541664 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.541839 kubelet[2801]: W0115 00:29:15.541707 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.541839 kubelet[2801]: E0115 00:29:15.541724 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.542308 kubelet[2801]: E0115 00:29:15.542285 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.542308 kubelet[2801]: W0115 00:29:15.542303 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.542365 kubelet[2801]: E0115 00:29:15.542321 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.543271 kubelet[2801]: E0115 00:29:15.542851 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.543271 kubelet[2801]: W0115 00:29:15.542874 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.543271 kubelet[2801]: E0115 00:29:15.542892 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.542996 systemd[1]: Started cri-containerd-4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246.scope - libcontainer container 4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246. Jan 15 00:29:15.544147 kubelet[2801]: E0115 00:29:15.544057 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.544147 kubelet[2801]: W0115 00:29:15.544091 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.544147 kubelet[2801]: E0115 00:29:15.544148 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.544651 kubelet[2801]: E0115 00:29:15.544545 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.544690 kubelet[2801]: W0115 00:29:15.544654 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.544690 kubelet[2801]: E0115 00:29:15.544666 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.545457 kubelet[2801]: E0115 00:29:15.545286 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.545502 kubelet[2801]: W0115 00:29:15.545484 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.545502 kubelet[2801]: E0115 00:29:15.545497 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.546306 kubelet[2801]: E0115 00:29:15.546210 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.546350 kubelet[2801]: W0115 00:29:15.546317 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.546350 kubelet[2801]: E0115 00:29:15.546330 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.546949 kubelet[2801]: E0115 00:29:15.546920 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.546949 kubelet[2801]: W0115 00:29:15.546940 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.547023 kubelet[2801]: E0115 00:29:15.546954 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.547546 kubelet[2801]: E0115 00:29:15.547349 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.547546 kubelet[2801]: W0115 00:29:15.547488 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.547696 kubelet[2801]: E0115 00:29:15.547577 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.548202 kubelet[2801]: E0115 00:29:15.548131 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.548202 kubelet[2801]: W0115 00:29:15.548149 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.548202 kubelet[2801]: E0115 00:29:15.548164 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.549204 kubelet[2801]: E0115 00:29:15.549084 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.549204 kubelet[2801]: W0115 00:29:15.549103 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.549204 kubelet[2801]: E0115 00:29:15.549117 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.549737 kubelet[2801]: E0115 00:29:15.549638 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.549737 kubelet[2801]: W0115 00:29:15.549673 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.549737 kubelet[2801]: E0115 00:29:15.549684 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.550373 kubelet[2801]: E0115 00:29:15.550293 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.550450 kubelet[2801]: W0115 00:29:15.550419 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.550450 kubelet[2801]: E0115 00:29:15.550432 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.561552 kubelet[2801]: E0115 00:29:15.561364 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.561552 kubelet[2801]: W0115 00:29:15.561445 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.561552 kubelet[2801]: E0115 00:29:15.561471 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.562152 kubelet[2801]: E0115 00:29:15.562075 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.562152 kubelet[2801]: W0115 00:29:15.562125 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.562152 kubelet[2801]: E0115 00:29:15.562147 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.562842 kubelet[2801]: E0115 00:29:15.562740 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.562900 kubelet[2801]: W0115 00:29:15.562882 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.563280 kubelet[2801]: E0115 00:29:15.562955 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.563604 kubelet[2801]: E0115 00:29:15.563481 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.563604 kubelet[2801]: W0115 00:29:15.563527 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.563604 kubelet[2801]: E0115 00:29:15.563546 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.564000 kubelet[2801]: E0115 00:29:15.563958 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.564000 kubelet[2801]: W0115 00:29:15.563999 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.564177 kubelet[2801]: E0115 00:29:15.564061 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.564693 kubelet[2801]: E0115 00:29:15.564674 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.564737 kubelet[2801]: W0115 00:29:15.564693 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.565095 kubelet[2801]: E0115 00:29:15.564763 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.565278 kubelet[2801]: E0115 00:29:15.565238 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.565278 kubelet[2801]: W0115 00:29:15.565252 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.565341 kubelet[2801]: E0115 00:29:15.565318 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.565975 kubelet[2801]: E0115 00:29:15.565903 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.565975 kubelet[2801]: W0115 00:29:15.565950 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.566046 kubelet[2801]: E0115 00:29:15.566012 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.566650 kubelet[2801]: E0115 00:29:15.566575 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.566650 kubelet[2801]: W0115 00:29:15.566589 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.566986 kubelet[2801]: E0115 00:29:15.566660 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.567208 kubelet[2801]: E0115 00:29:15.567128 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.567208 kubelet[2801]: W0115 00:29:15.567178 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.567267 kubelet[2801]: E0115 00:29:15.567246 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.567947 kubelet[2801]: E0115 00:29:15.567860 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.567947 kubelet[2801]: W0115 00:29:15.567906 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.568025 kubelet[2801]: E0115 00:29:15.567975 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.569632 kubelet[2801]: E0115 00:29:15.569614 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.569632 kubelet[2801]: W0115 00:29:15.569632 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.570149 kubelet[2801]: E0115 00:29:15.569696 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.570301 kubelet[2801]: E0115 00:29:15.570266 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.570337 kubelet[2801]: W0115 00:29:15.570305 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.570511 kubelet[2801]: E0115 00:29:15.570372 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.570967 kubelet[2801]: E0115 00:29:15.570925 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.570967 kubelet[2801]: W0115 00:29:15.570947 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.571024 kubelet[2801]: E0115 00:29:15.571012 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.571555 kubelet[2801]: E0115 00:29:15.571534 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.571602 kubelet[2801]: W0115 00:29:15.571556 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.572139 kubelet[2801]: E0115 00:29:15.571633 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.572563 kubelet[2801]: E0115 00:29:15.572494 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.572563 kubelet[2801]: W0115 00:29:15.572513 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.572563 kubelet[2801]: E0115 00:29:15.572535 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.573482 kubelet[2801]: E0115 00:29:15.573449 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.573482 kubelet[2801]: W0115 00:29:15.573468 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.573553 kubelet[2801]: E0115 00:29:15.573532 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:29:15.574213 kubelet[2801]: E0115 00:29:15.574098 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:29:15.574213 kubelet[2801]: W0115 00:29:15.574112 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:29:15.574213 kubelet[2801]: E0115 00:29:15.574127 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:29:15.619000 audit: BPF prog-id=164 op=LOAD Jan 15 00:29:15.619000 audit[3459]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3341 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:15.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356538633863373337393961306663666566343235643131306163 Jan 15 00:29:15.619000 audit: BPF prog-id=165 op=LOAD Jan 15 00:29:15.619000 audit[3459]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3341 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:15.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356538633863373337393961306663666566343235643131306163 Jan 15 00:29:15.619000 audit: BPF prog-id=165 op=UNLOAD Jan 15 00:29:15.619000 audit[3459]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:15.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356538633863373337393961306663666566343235643131306163 Jan 15 00:29:15.619000 audit: BPF prog-id=164 op=UNLOAD Jan 15 00:29:15.619000 audit[3459]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:15.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356538633863373337393961306663666566343235643131306163 Jan 15 00:29:15.619000 audit: BPF prog-id=166 op=LOAD Jan 15 00:29:15.619000 audit[3459]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3341 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:15.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356538633863373337393961306663666566343235643131306163 Jan 15 00:29:15.648856 containerd[1681]: time="2026-01-15T00:29:15.648686756Z" level=info msg="StartContainer for \"4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246\" returns successfully" Jan 15 00:29:15.673737 systemd[1]: cri-containerd-4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246.scope: Deactivated successfully. Jan 15 00:29:15.677000 audit: BPF prog-id=166 op=UNLOAD Jan 15 00:29:15.678320 containerd[1681]: time="2026-01-15T00:29:15.678165744Z" level=info msg="received container exit event container_id:\"4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246\" id:\"4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246\" pid:3480 exited_at:{seconds:1768436955 nanos:677455049}" Jan 15 00:29:15.721637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c5e8c8c73799a0fcfef425d110ac09177cb7dbff844bfe935eaf3a4c6255246-rootfs.mount: Deactivated successfully. Jan 15 00:29:16.335878 kubelet[2801]: E0115 00:29:16.335278 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:16.491579 kubelet[2801]: I0115 00:29:16.491382 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 00:29:16.494489 kubelet[2801]: E0115 00:29:16.492309 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:16.494489 kubelet[2801]: E0115 00:29:16.493246 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:16.496189 containerd[1681]: time="2026-01-15T00:29:16.496033105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 00:29:16.517552 kubelet[2801]: I0115 00:29:16.516750 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59d74fbc6b-9gnpt" podStartSLOduration=3.137273171 podStartE2EDuration="4.516732516s" podCreationTimestamp="2026-01-15 00:29:12 +0000 UTC" firstStartedPulling="2026-01-15 00:29:12.913669489 +0000 UTC m=+24.698948591" lastFinishedPulling="2026-01-15 00:29:14.293128834 +0000 UTC m=+26.078407936" observedRunningTime="2026-01-15 00:29:15.503167323 +0000 UTC m=+27.288446425" watchObservedRunningTime="2026-01-15 00:29:16.516732516 +0000 UTC m=+28.302011618" Jan 15 00:29:18.336581 kubelet[2801]: E0115 00:29:18.336521 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:18.484925 containerd[1681]: time="2026-01-15T00:29:18.484768831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:18.487206 containerd[1681]: time="2026-01-15T00:29:18.487165680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 00:29:18.489264 containerd[1681]: time="2026-01-15T00:29:18.489178886Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:18.495105 containerd[1681]: time="2026-01-15T00:29:18.494549437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:18.495171 containerd[1681]: time="2026-01-15T00:29:18.495119936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.999010249s" Jan 15 00:29:18.495171 containerd[1681]: time="2026-01-15T00:29:18.495145103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 00:29:18.499248 containerd[1681]: time="2026-01-15T00:29:18.499222219Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 00:29:18.514465 containerd[1681]: time="2026-01-15T00:29:18.514383096Z" level=info msg="Container beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:18.525941 containerd[1681]: time="2026-01-15T00:29:18.525837992Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f\"" Jan 15 00:29:18.526687 containerd[1681]: time="2026-01-15T00:29:18.526658067Z" level=info msg="StartContainer for \"beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f\"" Jan 15 00:29:18.528234 containerd[1681]: time="2026-01-15T00:29:18.528208628Z" level=info msg="connecting to shim beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f" address="unix:///run/containerd/s/4122ec011e33fbc7bf9d5ec830e80a5fe361c2585b094da1688bc9d6a5ea6e14" protocol=ttrpc version=3 Jan 15 00:29:18.555099 systemd[1]: Started cri-containerd-beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f.scope - libcontainer container beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f. 
Jan 15 00:29:18.656000 audit: BPF prog-id=167 op=LOAD Jan 15 00:29:18.660732 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 15 00:29:18.660924 kernel: audit: type=1334 audit(1768436958.656:548): prog-id=167 op=LOAD Jan 15 00:29:18.656000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.676019 kernel: audit: type=1300 audit(1768436958.656:548): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.676092 kernel: audit: type=1327 audit(1768436958.656:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.656000 audit: BPF prog-id=168 op=LOAD Jan 15 00:29:18.693551 kernel: audit: type=1334 audit(1768436958.656:549): prog-id=168 op=LOAD Jan 15 00:29:18.693626 kernel: audit: type=1300 audit(1768436958.656:549): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.656000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.704942 kernel: audit: type=1327 audit(1768436958.656:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.656000 audit: BPF prog-id=168 op=UNLOAD Jan 15 00:29:18.720752 kernel: audit: type=1334 audit(1768436958.656:550): prog-id=168 op=UNLOAD Jan 15 00:29:18.720936 kernel: audit: type=1300 audit(1768436958.656:550): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.656000 
audit[3553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.747562 kernel: audit: type=1327 audit(1768436958.656:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.747712 containerd[1681]: time="2026-01-15T00:29:18.745075168Z" level=info msg="StartContainer for \"beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f\" returns successfully" Jan 15 00:29:18.656000 audit: BPF prog-id=167 op=UNLOAD Jan 15 00:29:18.656000 audit[3553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:18.751908 kernel: audit: type=1334 audit(1768436958.656:551): prog-id=167 op=UNLOAD Jan 15 00:29:18.656000 audit: BPF prog-id=169 op=LOAD Jan 15 00:29:18.656000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3341 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:18.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656263636565643730653661663537323839343366396339353666 Jan 15 00:29:19.507050 kubelet[2801]: E0115 00:29:19.507017 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:19.586085 systemd[1]: cri-containerd-beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f.scope: Deactivated successfully. Jan 15 00:29:19.586493 systemd[1]: cri-containerd-beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f.scope: Consumed 940ms CPU time, 180.4M memory peak, 3.4M read from disk, 171.3M written to disk. 
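
The container exit events in this log (for example the one recorded above for container 4c5e8c8c…) report exited_at as separate Unix seconds and nanoseconds. A small conversion sketch, with a helper name of our choosing:

    from datetime import datetime, timezone

    # containerd reports exit times as exited_at:{seconds:... nanos:...};
    # converting the pair to RFC 3339 lines it up with the journal timestamps.
    def exited_at_to_rfc3339(seconds: int, nanos: int) -> str:
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
        return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

    # Values from the exit event for container 4c5e8c8c... earlier in this log:
    print(exited_at_to_rfc3339(1768436955, 677455049))
    # -> 2026-01-15T00:29:15.677455049Z, consistent with the surrounding timestamps
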
Jan 15 00:29:19.589000 audit: BPF prog-id=169 op=UNLOAD Jan 15 00:29:19.611859 containerd[1681]: time="2026-01-15T00:29:19.611648267Z" level=info msg="received container exit event container_id:\"beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f\" id:\"beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f\" pid:3566 exited_at:{seconds:1768436959 nanos:593040443}" Jan 15 00:29:19.657244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-beebcceed70e6af5728943f9c956f949741c4eec646226f820ee3c54affcbc9f-rootfs.mount: Deactivated successfully. Jan 15 00:29:19.688131 kubelet[2801]: I0115 00:29:19.688013 2801 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 00:29:19.773967 systemd[1]: Created slice kubepods-burstable-poda4865d61_f273_4528_ab08_e1c0e1f3c4fa.slice - libcontainer container kubepods-burstable-poda4865d61_f273_4528_ab08_e1c0e1f3c4fa.slice. Jan 15 00:29:19.790383 systemd[1]: Created slice kubepods-besteffort-podaa219502_b65d_488f_aa83_975822920d6e.slice - libcontainer container kubepods-besteffort-podaa219502_b65d_488f_aa83_975822920d6e.slice. Jan 15 00:29:19.797392 kubelet[2801]: I0115 00:29:19.797269 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93a98979-ee0c-4100-b4c9-9eb82d024b13-config-volume\") pod \"coredns-668d6bf9bc-qsbfm\" (UID: \"93a98979-ee0c-4100-b4c9-9eb82d024b13\") " pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:19.798300 kubelet[2801]: I0115 00:29:19.798250 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0da1261a-c922-41a4-a50d-63daf18be31b-goldmane-ca-bundle\") pod \"goldmane-666569f655-9sr6x\" (UID: \"0da1261a-c922-41a4-a50d-63daf18be31b\") " pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:19.798379 kubelet[2801]: I0115 00:29:19.798291 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da1261a-c922-41a4-a50d-63daf18be31b-config\") pod \"goldmane-666569f655-9sr6x\" (UID: \"0da1261a-c922-41a4-a50d-63daf18be31b\") " pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:19.798379 kubelet[2801]: I0115 00:29:19.798332 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-ca-bundle\") pod \"whisker-7767b5448c-h5sht\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:19.798379 kubelet[2801]: I0115 00:29:19.798358 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lthl2\" (UniqueName: \"kubernetes.io/projected/a4865d61-f273-4528-ab08-e1c0e1f3c4fa-kube-api-access-lthl2\") pod \"coredns-668d6bf9bc-d7z9q\" (UID: \"a4865d61-f273-4528-ab08-e1c0e1f3c4fa\") " pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:19.798512 kubelet[2801]: I0115 00:29:19.798384 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b2ac921-bc9c-4449-a6c1-96910dec381e-calico-apiserver-certs\") pod \"calico-apiserver-77c9fcdc8c-f8t27\" (UID: \"4b2ac921-bc9c-4449-a6c1-96910dec381e\") " 
pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:19.798512 kubelet[2801]: I0115 00:29:19.798460 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxxt\" (UniqueName: \"kubernetes.io/projected/6241b949-d82f-4e04-b2b8-fdb1cda43b39-kube-api-access-bpxxt\") pod \"calico-apiserver-6b544d645f-gznqk\" (UID: \"6241b949-d82f-4e04-b2b8-fdb1cda43b39\") " pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:19.798512 kubelet[2801]: I0115 00:29:19.798486 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0da1261a-c922-41a4-a50d-63daf18be31b-goldmane-key-pair\") pod \"goldmane-666569f655-9sr6x\" (UID: \"0da1261a-c922-41a4-a50d-63daf18be31b\") " pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:19.798512 kubelet[2801]: I0115 00:29:19.798509 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hkk\" (UniqueName: \"kubernetes.io/projected/ab27570c-5eb0-4b1f-9c2f-ecefc027b548-kube-api-access-54hkk\") pod \"calico-apiserver-77c9fcdc8c-wjfr8\" (UID: \"ab27570c-5eb0-4b1f-9c2f-ecefc027b548\") " pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:19.798606 kubelet[2801]: I0115 00:29:19.798532 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5qp\" (UniqueName: \"kubernetes.io/projected/0da1261a-c922-41a4-a50d-63daf18be31b-kube-api-access-bp5qp\") pod \"goldmane-666569f655-9sr6x\" (UID: \"0da1261a-c922-41a4-a50d-63daf18be31b\") " pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:19.798606 kubelet[2801]: I0115 00:29:19.798562 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-backend-key-pair\") pod \"whisker-7767b5448c-h5sht\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:19.798972 kubelet[2801]: I0115 00:29:19.798663 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/4b2ac921-bc9c-4449-a6c1-96910dec381e-kube-api-access-976wz\") pod \"calico-apiserver-77c9fcdc8c-f8t27\" (UID: \"4b2ac921-bc9c-4449-a6c1-96910dec381e\") " pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:19.798972 kubelet[2801]: I0115 00:29:19.798698 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6241b949-d82f-4e04-b2b8-fdb1cda43b39-calico-apiserver-certs\") pod \"calico-apiserver-6b544d645f-gznqk\" (UID: \"6241b949-d82f-4e04-b2b8-fdb1cda43b39\") " pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:19.798972 kubelet[2801]: I0115 00:29:19.798737 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7dz\" (UniqueName: \"kubernetes.io/projected/aa219502-b65d-488f-aa83-975822920d6e-kube-api-access-hl7dz\") pod \"calico-kube-controllers-b75c74db4-kxw6j\" (UID: \"aa219502-b65d-488f-aa83-975822920d6e\") " pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:19.798972 kubelet[2801]: I0115 00:29:19.798762 2801 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab27570c-5eb0-4b1f-9c2f-ecefc027b548-calico-apiserver-certs\") pod \"calico-apiserver-77c9fcdc8c-wjfr8\" (UID: \"ab27570c-5eb0-4b1f-9c2f-ecefc027b548\") " pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:19.801000 kubelet[2801]: I0115 00:29:19.800920 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa219502-b65d-488f-aa83-975822920d6e-tigera-ca-bundle\") pod \"calico-kube-controllers-b75c74db4-kxw6j\" (UID: \"aa219502-b65d-488f-aa83-975822920d6e\") " pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:19.801090 kubelet[2801]: I0115 00:29:19.801062 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgc9\" (UniqueName: \"kubernetes.io/projected/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-kube-api-access-ptgc9\") pod \"whisker-7767b5448c-h5sht\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:19.801128 kubelet[2801]: I0115 00:29:19.801099 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4865d61-f273-4528-ab08-e1c0e1f3c4fa-config-volume\") pod \"coredns-668d6bf9bc-d7z9q\" (UID: \"a4865d61-f273-4528-ab08-e1c0e1f3c4fa\") " pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:19.801159 kubelet[2801]: I0115 00:29:19.801124 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tmg\" (UniqueName: \"kubernetes.io/projected/93a98979-ee0c-4100-b4c9-9eb82d024b13-kube-api-access-p9tmg\") pod \"coredns-668d6bf9bc-qsbfm\" (UID: \"93a98979-ee0c-4100-b4c9-9eb82d024b13\") " pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:19.802222 systemd[1]: Created slice kubepods-burstable-pod93a98979_ee0c_4100_b4c9_9eb82d024b13.slice - libcontainer container kubepods-burstable-pod93a98979_ee0c_4100_b4c9_9eb82d024b13.slice. Jan 15 00:29:19.811849 systemd[1]: Created slice kubepods-besteffort-poddc018ed4_efb3_4ec1_94f0_dfe4d7ed1385.slice - libcontainer container kubepods-besteffort-poddc018ed4_efb3_4ec1_94f0_dfe4d7ed1385.slice. Jan 15 00:29:19.823921 systemd[1]: Created slice kubepods-besteffort-pod0da1261a_c922_41a4_a50d_63daf18be31b.slice - libcontainer container kubepods-besteffort-pod0da1261a_c922_41a4_a50d_63daf18be31b.slice. Jan 15 00:29:19.836708 systemd[1]: Created slice kubepods-besteffort-pod4b2ac921_bc9c_4449_a6c1_96910dec381e.slice - libcontainer container kubepods-besteffort-pod4b2ac921_bc9c_4449_a6c1_96910dec381e.slice. Jan 15 00:29:19.851381 systemd[1]: Created slice kubepods-besteffort-pod6241b949_d82f_4e04_b2b8_fdb1cda43b39.slice - libcontainer container kubepods-besteffort-pod6241b949_d82f_4e04_b2b8_fdb1cda43b39.slice. Jan 15 00:29:19.858052 systemd[1]: Created slice kubepods-besteffort-podab27570c_5eb0_4b1f_9c2f_ecefc027b548.slice - libcontainer container kubepods-besteffort-podab27570c_5eb0_4b1f_9c2f_ecefc027b548.slice. 
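
The kubepods slices systemd creates here encode the pod's QoS class and its UID, with the UID's dashes written as underscores. A sketch that recovers both parts; the regex and helper name are ours, and the form without a QoS infix is merely tolerated as an assumption:

    import re

    # kubepods-<qos>-pod<uid>.slice, with dashes in the UID replaced by underscores.
    SLICE_RE = re.compile(r"^kubepods-(?:(?P<qos>[a-z]+)-)?pod(?P<uid>[0-9a-f_]+)\.slice$")

    def parse_kubepods_slice(name: str):
        m = SLICE_RE.match(name)
        if m is None:
            return None
        return m.group("qos"), m.group("uid").replace("_", "-")

    # One of the slices created above; the recovered UID matches the one logged
    # elsewhere for pod coredns-668d6bf9bc-d7z9q.
    print(parse_kubepods_slice("kubepods-burstable-poda4865d61_f273_4528_ab08_e1c0e1f3c4fa.slice"))
    # -> ('burstable', 'a4865d61-f273-4528-ab08-e1c0e1f3c4fa')
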
Jan 15 00:29:20.082737 kubelet[2801]: E0115 00:29:20.081219 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:20.083562 containerd[1681]: time="2026-01-15T00:29:20.082986866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:20.098393 containerd[1681]: time="2026-01-15T00:29:20.098305529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:20.108226 kubelet[2801]: E0115 00:29:20.108122 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:20.115463 containerd[1681]: time="2026-01-15T00:29:20.115239383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:20.118221 containerd[1681]: time="2026-01-15T00:29:20.118028685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7767b5448c-h5sht,Uid:dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:20.136880 containerd[1681]: time="2026-01-15T00:29:20.136741457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:20.144075 containerd[1681]: time="2026-01-15T00:29:20.143949512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:20.160652 containerd[1681]: time="2026-01-15T00:29:20.160606740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:20.166256 containerd[1681]: time="2026-01-15T00:29:20.165974412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:20.346887 systemd[1]: Created slice kubepods-besteffort-pod44d6a0a4_cff1_4f36_aee4_f6ca9d02fb19.slice - libcontainer container kubepods-besteffort-pod44d6a0a4_cff1_4f36_aee4_f6ca9d02fb19.slice. 
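
The repeated "Nameserver limits exceeded" warnings above show the kubelet applying only three nameservers (1.1.1.1 1.0.0.1 8.8.8.8) and omitting the rest. A sketch of that check against a made-up resolv.conf, assuming the first three entries are the ones kept, as the applied line suggests:

    # Count nameserver entries in a resolv.conf and show what would survive the
    # three-entry limit reflected in the kubelet warnings in this log.
    MAX_APPLIED_NAMESERVERS = 3  # inferred from the applied nameserver line above

    def nameservers(resolv_conf_text: str) -> list[str]:
        out = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                out.append(parts[1])
        return out

    # Hypothetical node resolv.conf with one entry too many:
    conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    ns = nameservers(conf)
    if len(ns) > MAX_APPLIED_NAMESERVERS:
        print("applied:", " ".join(ns[:MAX_APPLIED_NAMESERVERS]),
              "| omitted:", " ".join(ns[MAX_APPLIED_NAMESERVERS:]))
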
Jan 15 00:29:20.352580 containerd[1681]: time="2026-01-15T00:29:20.352233199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:20.393949 containerd[1681]: time="2026-01-15T00:29:20.393880635Z" level=error msg="Failed to destroy network for sandbox \"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.398692 containerd[1681]: time="2026-01-15T00:29:20.398499930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.399342 kubelet[2801]: E0115 00:29:20.399166 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.399555 kubelet[2801]: E0115 00:29:20.399365 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:20.399555 kubelet[2801]: E0115 00:29:20.399399 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:20.399555 kubelet[2801]: E0115 00:29:20.399516 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qsbfm_kube-system(93a98979-ee0c-4100-b4c9-9eb82d024b13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qsbfm_kube-system(93a98979-ee0c-4100-b4c9-9eb82d024b13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"702140a8e080cf02a8805bd1e4665b9220a46b3cce3f20975083313452eb7042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qsbfm" podUID="93a98979-ee0c-4100-b4c9-9eb82d024b13" Jan 15 00:29:20.406896 containerd[1681]: time="2026-01-15T00:29:20.406120086Z" level=error msg="Failed to destroy network for 
sandbox \"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.415759 containerd[1681]: time="2026-01-15T00:29:20.415640052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.416877 kubelet[2801]: E0115 00:29:20.416712 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.416961 kubelet[2801]: E0115 00:29:20.416917 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:20.416961 kubelet[2801]: E0115 00:29:20.416952 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:20.417169 kubelet[2801]: E0115 00:29:20.417018 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d7z9q_kube-system(a4865d61-f273-4528-ab08-e1c0e1f3c4fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d7z9q_kube-system(a4865d61-f273-4528-ab08-e1c0e1f3c4fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf30ed86715ac31d0e5cbcdb41330df5aa69e281a9ad91f3e8a184919a2e364\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d7z9q" podUID="a4865d61-f273-4528-ab08-e1c0e1f3c4fa" Jan 15 00:29:20.442744 containerd[1681]: time="2026-01-15T00:29:20.442586271Z" level=error msg="Failed to destroy network for sandbox \"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.448996 containerd[1681]: time="2026-01-15T00:29:20.448639832Z" level=error 
msg="Failed to destroy network for sandbox \"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.451555 containerd[1681]: time="2026-01-15T00:29:20.451076427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.454887 kubelet[2801]: E0115 00:29:20.453364 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.454887 kubelet[2801]: E0115 00:29:20.453701 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:20.454887 kubelet[2801]: E0115 00:29:20.454250 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:20.455075 kubelet[2801]: E0115 00:29:20.454636 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fd4988c2b7c82e80fe3df40f3c7fefa1d1020cc61ebf8e634c1b4a9a7a1872c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:29:20.457578 containerd[1681]: time="2026-01-15T00:29:20.457534181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.458075 containerd[1681]: time="2026-01-15T00:29:20.458041848Z" level=error msg="Failed to destroy network for sandbox \"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.461380 kubelet[2801]: E0115 00:29:20.461253 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.461520 kubelet[2801]: E0115 00:29:20.461384 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:20.461520 kubelet[2801]: E0115 00:29:20.461468 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:20.461614 kubelet[2801]: E0115 00:29:20.461521 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e660af5fe71321fe841444a8af07009f5f1850d8ae2abeb5e15e37325c350d92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:29:20.463367 containerd[1681]: time="2026-01-15T00:29:20.463293180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7767b5448c-h5sht,Uid:dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.464125 kubelet[2801]: E0115 
00:29:20.463909 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.464202 kubelet[2801]: E0115 00:29:20.464148 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:20.464202 kubelet[2801]: E0115 00:29:20.464186 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:20.464293 kubelet[2801]: E0115 00:29:20.464244 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7767b5448c-h5sht_calico-system(dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7767b5448c-h5sht_calico-system(dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a42712bd0845d9b7da0a6188d55949853b127eccd66e589dfacebcc147034a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7767b5448c-h5sht" podUID="dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" Jan 15 00:29:20.470307 containerd[1681]: time="2026-01-15T00:29:20.470173517Z" level=error msg="Failed to destroy network for sandbox \"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.470497 containerd[1681]: time="2026-01-15T00:29:20.470282215Z" level=error msg="Failed to destroy network for sandbox \"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.477913 containerd[1681]: time="2026-01-15T00:29:20.477690348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
15 00:29:20.478522 kubelet[2801]: E0115 00:29:20.478368 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.478522 kubelet[2801]: E0115 00:29:20.478484 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:20.478522 kubelet[2801]: E0115 00:29:20.478518 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:20.479325 kubelet[2801]: E0115 00:29:20.479121 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fd1d0167d7b028c69ed38fb6717774148c19638060060648a3651112feb3d98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:29:20.482287 containerd[1681]: time="2026-01-15T00:29:20.482098294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.483672 kubelet[2801]: E0115 00:29:20.483513 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.484296 kubelet[2801]: E0115 00:29:20.484178 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:20.484634 kubelet[2801]: E0115 00:29:20.484613 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:20.485152 kubelet[2801]: E0115 00:29:20.485118 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbc4c875a406f7bcdbe5a4c23f9a495fd7a51baaac8fbcffea01a5f9e4363cf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:29:20.512697 containerd[1681]: time="2026-01-15T00:29:20.512650249Z" level=error msg="Failed to destroy network for sandbox \"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.517126 kubelet[2801]: E0115 00:29:20.517056 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:20.517595 containerd[1681]: time="2026-01-15T00:29:20.517568332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.518348 kubelet[2801]: E0115 00:29:20.518292 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.518651 kubelet[2801]: E0115 00:29:20.518551 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:20.518842 kubelet[2801]: E0115 00:29:20.518820 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:20.518975 kubelet[2801]: E0115 00:29:20.518952 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9dfd25f6f406f3a4532427f788ee5ce1a7e954219a05016e1e84c9f1c0c7729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:29:20.521253 containerd[1681]: time="2026-01-15T00:29:20.520908215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 00:29:20.533025 containerd[1681]: time="2026-01-15T00:29:20.531288574Z" level=error msg="Failed to destroy network for sandbox \"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.537736 containerd[1681]: time="2026-01-15T00:29:20.537638784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.537991 kubelet[2801]: E0115 00:29:20.537949 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:20.538063 kubelet[2801]: E0115 00:29:20.537990 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:20.538063 kubelet[2801]: E0115 00:29:20.538008 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:20.538178 kubelet[2801]: E0115 00:29:20.538119 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77eb7d77b5a6a5e7aea60a3b61d96081379f06f356e3ed8cced8f8eca5bc19c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:25.202318 kubelet[2801]: I0115 00:29:25.198318 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 00:29:25.994992 kubelet[2801]: E0115 00:29:25.994644 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:27.866634 kubelet[2801]: E0115 00:29:27.866240 2801 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.321s" Jan 15 00:29:28.072897 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 15 00:29:28.073860 kernel: audit: type=1325 audit(1768436968.062:554): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:28.062000 audit[3909]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:28.062000 audit[3909]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8dd95380 a2=0 a3=7ffc8dd9536c items=0 ppid=2911 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:28.135214 kernel: audit: type=1300 audit(1768436968.062:554): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8dd95380 a2=0 a3=7ffc8dd9536c items=0 ppid=2911 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:28.135463 kernel: audit: type=1327 audit(1768436968.062:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:28.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:28.094000 audit[3909]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3909 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:28.190519 kernel: audit: type=1325 audit(1768436968.094:555): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:28.194903 kernel: audit: type=1300 audit(1768436968.094:555): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8dd95380 a2=0 a3=7ffc8dd9536c items=0 ppid=2911 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:28.094000 audit[3909]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8dd95380 a2=0 a3=7ffc8dd9536c items=0 ppid=2911 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:28.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:28.273159 kernel: audit: type=1327 audit(1768436968.094:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:28.882592 kubelet[2801]: E0115 00:29:28.882403 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:31.336735 kubelet[2801]: E0115 00:29:31.336652 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:31.338700 containerd[1681]: time="2026-01-15T00:29:31.337598125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:31.339728 containerd[1681]: time="2026-01-15T00:29:31.338884603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7767b5448c-h5sht,Uid:dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:31.339983 containerd[1681]: time="2026-01-15T00:29:31.338117622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:31.339983 containerd[1681]: time="2026-01-15T00:29:31.337598704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:31.531533 containerd[1681]: time="2026-01-15T00:29:31.529552138Z" level=error msg="Failed to destroy network for sandbox \"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.534192 containerd[1681]: time="2026-01-15T00:29:31.534050317Z" level=error msg="Failed to destroy network for sandbox \"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 15 00:29:31.539006 containerd[1681]: time="2026-01-15T00:29:31.538864881Z" level=error msg="Failed to destroy network for sandbox \"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.539727 containerd[1681]: time="2026-01-15T00:29:31.539620812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.541549 kubelet[2801]: E0115 00:29:31.541374 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.541671 kubelet[2801]: E0115 00:29:31.541598 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:31.541892 kubelet[2801]: E0115 00:29:31.541722 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" Jan 15 00:29:31.542314 kubelet[2801]: E0115 00:29:31.542210 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d1ea6a93213c8c0a97bd630ae209372938fdd6ce70b1fb377b6038d29d47de3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:29:31.543015 containerd[1681]: time="2026-01-15T00:29:31.542766105Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7767b5448c-h5sht,Uid:dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.544287 kubelet[2801]: E0115 00:29:31.544198 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.544287 kubelet[2801]: E0115 00:29:31.544247 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:31.544287 kubelet[2801]: E0115 00:29:31.544274 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7767b5448c-h5sht" Jan 15 00:29:31.544566 kubelet[2801]: E0115 00:29:31.544315 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7767b5448c-h5sht_calico-system(dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7767b5448c-h5sht_calico-system(dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e553837fef1f0eddbe0ab0d2093a95a67ef5d51594662be4178a982b56b6e3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7767b5448c-h5sht" podUID="dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" Jan 15 00:29:31.550604 containerd[1681]: time="2026-01-15T00:29:31.550408392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.550875 kubelet[2801]: E0115 00:29:31.550703 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.550875 kubelet[2801]: E0115 00:29:31.550752 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:31.550875 kubelet[2801]: E0115 00:29:31.550862 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7z9q" Jan 15 00:29:31.551167 kubelet[2801]: E0115 00:29:31.550908 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d7z9q_kube-system(a4865d61-f273-4528-ab08-e1c0e1f3c4fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d7z9q_kube-system(a4865d61-f273-4528-ab08-e1c0e1f3c4fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3cc9e1661063ef260f4e932fe5823c42beb252750be5c6d548cc298de97ab2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d7z9q" podUID="a4865d61-f273-4528-ab08-e1c0e1f3c4fa" Jan 15 00:29:31.556549 containerd[1681]: time="2026-01-15T00:29:31.556414060Z" level=error msg="Failed to destroy network for sandbox \"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.560641 containerd[1681]: time="2026-01-15T00:29:31.560541346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.561104 kubelet[2801]: E0115 00:29:31.560862 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:31.561104 kubelet[2801]: E0115 00:29:31.560915 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:31.561104 kubelet[2801]: E0115 00:29:31.561095 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" Jan 15 00:29:31.561602 kubelet[2801]: E0115 00:29:31.561171 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccfcb5a8ddfce527dc14795289f453709e0b14972a3370cd7199b9c0d345e7bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:29:32.232659 systemd[1]: run-netns-cni\x2da2e63a31\x2de4fe\x2d3634\x2d0e6e\x2df5a08f050a5f.mount: Deactivated successfully. Jan 15 00:29:32.234251 systemd[1]: run-netns-cni\x2d38b95006\x2d463d\x2d1982\x2d972b\x2da5e498625efa.mount: Deactivated successfully. Jan 15 00:29:32.234389 systemd[1]: run-netns-cni\x2d4378f80b\x2d604e\x2dd804\x2de0d5\x2d50f5b04adcef.mount: Deactivated successfully. Jan 15 00:29:32.234533 systemd[1]: run-netns-cni\x2dc72c8b50\x2dce7c\x2d9362\x2de08c\x2d822698380041.mount: Deactivated successfully. Jan 15 00:29:33.336132 containerd[1681]: time="2026-01-15T00:29:33.335612382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:33.497413 containerd[1681]: time="2026-01-15T00:29:33.497251414Z" level=error msg="Failed to destroy network for sandbox \"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:33.501979 systemd[1]: run-netns-cni\x2dcd17c767\x2d2376\x2df5d6\x2d563b\x2d297f7df6bd01.mount: Deactivated successfully. 
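Every RunPodSandbox failure above, and each matching "Failed to destroy network" teardown error, is the same precondition tripping: the Calico CNI plugin wants to read /var/lib/calico/nodename, which the calico/node container writes after it starts, and at this point that container's image is still being pulled (the PullImage line earlier), so every add/delete returns the stat error and kubelet retries with a fresh sandbox ID. A minimal illustrative sketch of that kind of check, modelled on the error text in the log rather than taken from the Calico source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the file the calico/node container writes once it is up;
// a CNI add/delete that runs before then has nothing to read.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename reproduces the failure mode in the journal: if the file is not
// there yet, surface the stat error plus the hint seen in the log messages.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cni add would fail:", err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}

These errors are expected to clear on their own once calico-node is actually running and has written the file; the retries themselves need no intervention.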
Jan 15 00:29:33.573667 containerd[1681]: time="2026-01-15T00:29:33.573257661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:33.576721 kubelet[2801]: E0115 00:29:33.576648 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:33.578378 kubelet[2801]: E0115 00:29:33.578073 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:33.578378 kubelet[2801]: E0115 00:29:33.578238 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" Jan 15 00:29:33.579623 kubelet[2801]: E0115 00:29:33.579560 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83439719f7d61b3355495245dcf98e738e6eadd53ddda89c36df98fd75996a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:29:34.335635 kubelet[2801]: E0115 00:29:34.335549 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:34.339642 containerd[1681]: time="2026-01-15T00:29:34.339374573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:34.341669 containerd[1681]: time="2026-01-15T00:29:34.341576734Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:34.507137 containerd[1681]: time="2026-01-15T00:29:34.506936395Z" level=error msg="Failed to destroy network for sandbox \"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.510678 systemd[1]: run-netns-cni\x2d30907ebe\x2dd679\x2d5d50\x2d4826\x2d92925e2f32af.mount: Deactivated successfully. Jan 15 00:29:34.514488 containerd[1681]: time="2026-01-15T00:29:34.514375658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.515222 kubelet[2801]: E0115 00:29:34.515162 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.515850 kubelet[2801]: E0115 00:29:34.515602 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:34.515850 kubelet[2801]: E0115 00:29:34.515710 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qsbfm" Jan 15 00:29:34.516145 kubelet[2801]: E0115 00:29:34.516007 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qsbfm_kube-system(93a98979-ee0c-4100-b4c9-9eb82d024b13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qsbfm_kube-system(93a98979-ee0c-4100-b4c9-9eb82d024b13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73e364d8e48093514da7f38af798c7678242b4ffb6371936a235a60fbec15913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qsbfm" podUID="93a98979-ee0c-4100-b4c9-9eb82d024b13" Jan 15 00:29:34.538231 containerd[1681]: time="2026-01-15T00:29:34.538121875Z" level=error msg="Failed to 
destroy network for sandbox \"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.541067 systemd[1]: run-netns-cni\x2d31e92515\x2de031\x2d82a3\x2de69e\x2df438066cb937.mount: Deactivated successfully. Jan 15 00:29:34.544016 containerd[1681]: time="2026-01-15T00:29:34.543650533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.544761 kubelet[2801]: E0115 00:29:34.544730 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:34.545250 kubelet[2801]: E0115 00:29:34.545022 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:34.545250 kubelet[2801]: E0115 00:29:34.545131 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9lkl" Jan 15 00:29:34.545599 kubelet[2801]: E0115 00:29:34.545433 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4bb12506d1a2d7ee8b9491df2c8f38514c67aae9fb7ef781f1940384903fd9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:35.194126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056122168.mount: Deactivated successfully. 
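The run-netns-cni\x2d… and containerd tmpmount units in the cleanup lines above are systemd mount units: in unit names, "\x2d" is an escaped '-', while a bare '-' stands for '/', so each run-netns unit maps back to a network namespace mount under /run/netns that containerd created for a failed sandbox and then tore down. A simplified decoder, for illustration only (it handles just the \xNN and '-' rules, not systemd's full escaping):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath converts a systemd mount unit name back to the path it guards,
// handling only the two rules visible in the journal above: "\xNN" is an
// escaped byte and a bare "-" is a path separator. Simplified on purpose.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
			b.WriteByte(name[i])
			i++
		case name[i] == '-':
			b.WriteByte('/')
			i++
		default:
			b.WriteByte(name[i])
			i++
		}
	}
	return b.String()
}

func main() {
	// One of the escaped unit names from the journal entries above.
	fmt.Println(unitToPath(`run-netns-cni\x2d31e92515\x2de031\x2d82a3\x2de69e\x2df438066cb937.mount`))
	// Prints: /run/netns/cni-31e92515-e031-82a3-e69e-f438066cb937
}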
Jan 15 00:29:35.321576 containerd[1681]: time="2026-01-15T00:29:35.321413914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:35.322558 containerd[1681]: time="2026-01-15T00:29:35.322489399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 00:29:35.325443 containerd[1681]: time="2026-01-15T00:29:35.325364597Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:35.328030 containerd[1681]: time="2026-01-15T00:29:35.327938227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:29:35.329084 containerd[1681]: time="2026-01-15T00:29:35.328994091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.807770397s" Jan 15 00:29:35.329197 containerd[1681]: time="2026-01-15T00:29:35.329132979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 00:29:35.335947 containerd[1681]: time="2026-01-15T00:29:35.335718018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:35.337049 containerd[1681]: time="2026-01-15T00:29:35.336759951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:35.363940 containerd[1681]: time="2026-01-15T00:29:35.363900961Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 00:29:35.416412 containerd[1681]: time="2026-01-15T00:29:35.416249800Z" level=info msg="Container ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:35.443666 containerd[1681]: time="2026-01-15T00:29:35.443397740Z" level=info msg="CreateContainer within sandbox \"8a56971c808bff31c28fbf2fa0da5d43b9636263310a9abf94c55a7eaada999e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf\"" Jan 15 00:29:35.451057 containerd[1681]: time="2026-01-15T00:29:35.448667836Z" level=info msg="StartContainer for \"ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf\"" Jan 15 00:29:35.464269 containerd[1681]: time="2026-01-15T00:29:35.464178881Z" level=error msg="Failed to destroy network for sandbox \"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.469010 
containerd[1681]: time="2026-01-15T00:29:35.468961489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.469596 systemd[1]: run-netns-cni\x2d3ea09fea\x2d1508\x2ddc59\x2d9d1d\x2deb8d8b036a44.mount: Deactivated successfully. Jan 15 00:29:35.470171 containerd[1681]: time="2026-01-15T00:29:35.469995872Z" level=info msg="connecting to shim ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf" address="unix:///run/containerd/s/4122ec011e33fbc7bf9d5ec830e80a5fe361c2585b094da1688bc9d6a5ea6e14" protocol=ttrpc version=3 Jan 15 00:29:35.471536 kubelet[2801]: E0115 00:29:35.470295 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.474236 kubelet[2801]: E0115 00:29:35.473435 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:35.474236 kubelet[2801]: E0115 00:29:35.473547 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" Jan 15 00:29:35.474236 kubelet[2801]: E0115 00:29:35.473926 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f578c5b353d0992ab1b58d7713f244f3002d8f10597f38c1c18b7c0fc9b25fe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:29:35.478313 containerd[1681]: time="2026-01-15T00:29:35.478176638Z" level=error msg="Failed to destroy network for sandbox \"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.480977 systemd[1]: run-netns-cni\x2dfa1e2e21\x2db272\x2d512a\x2df740\x2d211e9e286a70.mount: Deactivated successfully. Jan 15 00:29:35.484708 containerd[1681]: time="2026-01-15T00:29:35.484537929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.485292 kubelet[2801]: E0115 00:29:35.485077 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:29:35.485292 kubelet[2801]: E0115 00:29:35.485272 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:35.485524 kubelet[2801]: E0115 00:29:35.485302 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9sr6x" Jan 15 00:29:35.485524 kubelet[2801]: E0115 00:29:35.485392 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5819513d62c6e673cd2b619172c1227dd610e1bedb41f2bc32145c4540a78ebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:29:35.601924 systemd[1]: Started cri-containerd-ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf.scope - libcontainer container ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf. 
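The run-netns mount units being cleaned up in these entries are systemd-escaped path names: "-" stands for "/", and "\xNN" escapes a literal byte such as "-" or "~". A small sketch, with one unit name copied from the entries above, that undoes the escaping:

# Sketch: decode a systemd mount unit name back into its mount path.
import re

def unescape_mount_unit(unit: str) -> str:
    # Replace the "-" separators first, then expand \xNN escapes; doing it
    # the other way round would turn a decoded "-" (from \x2d) into "/".
    name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
    name = name.replace("-", "/")
    name = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(unescape_mount_unit(
    r"run-netns-cni\x2dfa1e2e21\x2db272\x2d512a\x2df740\x2d211e9e286a70.mount"))
# -> /run/netns/cni-fa1e2e21-b272-512a-f740-211e9e286a70, one of the CNI
#    network namespaces containerd tore down after the failed sandboxes above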
Jan 15 00:29:35.726000 audit: BPF prog-id=170 op=LOAD Jan 15 00:29:35.726000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.748262 kernel: audit: type=1334 audit(1768436975.726:556): prog-id=170 op=LOAD Jan 15 00:29:35.748389 kernel: audit: type=1300 audit(1768436975.726:556): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.748604 kernel: audit: type=1327 audit(1768436975.726:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.762672 kernel: audit: type=1334 audit(1768436975.726:557): prog-id=171 op=LOAD Jan 15 00:29:35.726000 audit: BPF prog-id=171 op=LOAD Jan 15 00:29:35.726000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.786351 kernel: audit: type=1300 audit(1768436975.726:557): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.786555 kernel: audit: type=1327 audit(1768436975.726:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.808998 kernel: audit: type=1334 audit(1768436975.726:558): prog-id=171 op=UNLOAD Jan 15 00:29:35.726000 audit: BPF prog-id=171 op=UNLOAD Jan 15 00:29:35.726000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.726000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.845426 kernel: audit: type=1300 audit(1768436975.726:558): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.855518 kernel: audit: type=1327 audit(1768436975.726:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.855589 kernel: audit: type=1334 audit(1768436975.726:559): prog-id=170 op=UNLOAD Jan 15 00:29:35.726000 audit: BPF prog-id=170 op=UNLOAD Jan 15 00:29:35.726000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.726000 audit: BPF prog-id=172 op=LOAD Jan 15 00:29:35.726000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3341 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:35.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561396664653566383232653131396338353064633761626634303139 Jan 15 00:29:35.866744 containerd[1681]: time="2026-01-15T00:29:35.866598737Z" level=info msg="StartContainer for \"ea9fde5f822e119c850dc7abf40193f0d4fca31ca693755afc7d18dfe1452daf\" returns successfully" Jan 15 00:29:35.935308 kubelet[2801]: E0115 00:29:35.935126 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:36.087371 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 00:29:36.088066 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
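The audit PROCTITLE fields interleaved with the BPF LOAD/UNLOAD records above are the kernel's hex encoding of the process command line, with NUL bytes between arguments and truncation at 128 bytes. Decoding the value that repeats throughout this block shows it is the runc invocation for the calico-node container that was just started; a short sketch:

# Sketch: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated,
# truncated by the kernel at 128 bytes). Hex copied from the records above.
proctitle = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
             "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
             "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
             "696F2F6561396664653566383232653131396338353064633761626634303139")

args = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(a.decode() for a in args))
# -> runc --root /run/containerd/runc/k8s.io
#         --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ea9fde5f822e119c850dc7abf4019
#    (the trailing container ID is cut off by the proctitle length limit)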
Jan 15 00:29:36.305404 kubelet[2801]: I0115 00:29:36.305086 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v2kd2" podStartSLOduration=2.036099652 podStartE2EDuration="24.305059243s" podCreationTimestamp="2026-01-15 00:29:12 +0000 UTC" firstStartedPulling="2026-01-15 00:29:13.061724383 +0000 UTC m=+24.847003485" lastFinishedPulling="2026-01-15 00:29:35.330683973 +0000 UTC m=+47.115963076" observedRunningTime="2026-01-15 00:29:35.977231501 +0000 UTC m=+47.762510623" watchObservedRunningTime="2026-01-15 00:29:36.305059243 +0000 UTC m=+48.090338356" Jan 15 00:29:36.361202 kubelet[2801]: I0115 00:29:36.361126 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptgc9\" (UniqueName: \"kubernetes.io/projected/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-kube-api-access-ptgc9\") pod \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " Jan 15 00:29:36.361952 kubelet[2801]: I0115 00:29:36.361887 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-backend-key-pair\") pod \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " Jan 15 00:29:36.362018 kubelet[2801]: I0115 00:29:36.361995 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-ca-bundle\") pod \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\" (UID: \"dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385\") " Jan 15 00:29:36.363142 kubelet[2801]: I0115 00:29:36.362978 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" (UID: "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 00:29:36.377700 kubelet[2801]: I0115 00:29:36.377582 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" (UID: "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 00:29:36.383299 kubelet[2801]: I0115 00:29:36.383184 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-kube-api-access-ptgc9" (OuterVolumeSpecName: "kube-api-access-ptgc9") pod "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" (UID: "dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385"). InnerVolumeSpecName "kube-api-access-ptgc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 00:29:36.403220 systemd[1]: var-lib-kubelet-pods-dc018ed4\x2defb3\x2d4ec1\x2d94f0\x2ddfe4d7ed1385-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptgc9.mount: Deactivated successfully. Jan 15 00:29:36.403513 systemd[1]: var-lib-kubelet-pods-dc018ed4\x2defb3\x2d4ec1\x2d94f0\x2ddfe4d7ed1385-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
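The pod_startup_latency_tracker entry above reports two durations for calico-node-v2kd2: an end-to-end duration from pod creation to observed running (~24.3 s) and an SLO duration (~2.04 s) that, consistent with the kubelet tracker discounting image pulls, equals the end-to-end time minus the window between firstStartedPulling and lastFinishedPulling. A quick check of that arithmetic, with the timestamps copied from the entry:

# Sketch: verify podStartE2EDuration and podStartSLOduration from the
# timestamps in the pod_startup_latency_tracker entry above.
from datetime import datetime

def ts(s: str) -> datetime:
    # journal timestamps carry nanoseconds; %f only parses microseconds
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created       = datetime(2026, 1, 15, 0, 29, 12)      # podCreationTimestamp
pull_started  = ts("2026-01-15 00:29:13.061724383")   # firstStartedPulling
pull_finished = ts("2026-01-15 00:29:35.330683973")   # lastFinishedPulling
running       = ts("2026-01-15 00:29:36.305059243")   # observedRunningTime

e2e = (running - created).total_seconds()
slo = e2e - (pull_finished - pull_started).total_seconds()
print(f"E2E ~ {e2e:.3f}s, SLO ~ {slo:.3f}s")   # ~24.305s and ~2.036s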
Jan 15 00:29:36.463584 kubelet[2801]: I0115 00:29:36.463442 2801 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptgc9\" (UniqueName: \"kubernetes.io/projected/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-kube-api-access-ptgc9\") on node \"localhost\" DevicePath \"\"" Jan 15 00:29:36.463584 kubelet[2801]: I0115 00:29:36.463569 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 15 00:29:36.463584 kubelet[2801]: I0115 00:29:36.463582 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 15 00:29:36.937936 kubelet[2801]: E0115 00:29:36.937738 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:36.946194 systemd[1]: Removed slice kubepods-besteffort-poddc018ed4_efb3_4ec1_94f0_dfe4d7ed1385.slice - libcontainer container kubepods-besteffort-poddc018ed4_efb3_4ec1_94f0_dfe4d7ed1385.slice. Jan 15 00:29:37.042892 systemd[1]: Created slice kubepods-besteffort-pod3f98e226_1bab_40f4_84c7_2ec1cf926463.slice - libcontainer container kubepods-besteffort-pod3f98e226_1bab_40f4_84c7_2ec1cf926463.slice. Jan 15 00:29:37.069733 kubelet[2801]: I0115 00:29:37.069650 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9gt\" (UniqueName: \"kubernetes.io/projected/3f98e226-1bab-40f4-84c7-2ec1cf926463-kube-api-access-wr9gt\") pod \"whisker-74646ff747-bfw76\" (UID: \"3f98e226-1bab-40f4-84c7-2ec1cf926463\") " pod="calico-system/whisker-74646ff747-bfw76" Jan 15 00:29:37.069733 kubelet[2801]: I0115 00:29:37.069714 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f98e226-1bab-40f4-84c7-2ec1cf926463-whisker-backend-key-pair\") pod \"whisker-74646ff747-bfw76\" (UID: \"3f98e226-1bab-40f4-84c7-2ec1cf926463\") " pod="calico-system/whisker-74646ff747-bfw76" Jan 15 00:29:37.069733 kubelet[2801]: I0115 00:29:37.069746 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f98e226-1bab-40f4-84c7-2ec1cf926463-whisker-ca-bundle\") pod \"whisker-74646ff747-bfw76\" (UID: \"3f98e226-1bab-40f4-84c7-2ec1cf926463\") " pod="calico-system/whisker-74646ff747-bfw76" Jan 15 00:29:37.353617 containerd[1681]: time="2026-01-15T00:29:37.353429142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74646ff747-bfw76,Uid:3f98e226-1bab-40f4-84c7-2ec1cf926463,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:37.684090 systemd-networkd[1541]: calicf5ad9fbc9c: Link UP Jan 15 00:29:37.688347 systemd-networkd[1541]: calicf5ad9fbc9c: Gained carrier Jan 15 00:29:37.711183 containerd[1681]: 2026-01-15 00:29:37.424 [INFO][4326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:29:37.711183 containerd[1681]: 2026-01-15 00:29:37.462 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74646ff747--bfw76-eth0 whisker-74646ff747- calico-system 
3f98e226-1bab-40f4-84c7-2ec1cf926463 939 0 2026-01-15 00:29:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74646ff747 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74646ff747-bfw76 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicf5ad9fbc9c [] [] }} ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-" Jan 15 00:29:37.711183 containerd[1681]: 2026-01-15 00:29:37.462 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.711183 containerd[1681]: 2026-01-15 00:29:37.592 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" HandleID="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Workload="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.593 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" HandleID="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Workload="localhost-k8s-whisker--74646ff747--bfw76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000511280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74646ff747-bfw76", "timestamp":"2026-01-15 00:29:37.592422654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.593 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.594 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.594 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.610 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" host="localhost" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.623 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.631 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.633 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.637 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:37.711863 containerd[1681]: 2026-01-15 00:29:37.637 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" host="localhost" Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.639 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.646 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" host="localhost" Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.655 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" host="localhost" Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.656 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" host="localhost" Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.656 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
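The IPAM walk above confirms this node's existing affinity for the block 192.168.88.128/26 and claims the first free address, 192.168.88.129, for the whisker pod's endpoint. The block arithmetic, for reference:

# Sketch: the address block Calico IPAM loaded for this node and the
# address it claimed, per the ipam.go entries above.
import ipaddress

block    = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.129")

print(block.num_addresses, "addresses:", block[0], "-", block[-1])
# -> 64 addresses: 192.168.88.128 - 192.168.88.191
print(assigned in block)  # True: the first host address after the block base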
Jan 15 00:29:37.712437 containerd[1681]: 2026-01-15 00:29:37.656 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" HandleID="k8s-pod-network.0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Workload="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.713061 containerd[1681]: 2026-01-15 00:29:37.663 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74646ff747--bfw76-eth0", GenerateName:"whisker-74646ff747-", Namespace:"calico-system", SelfLink:"", UID:"3f98e226-1bab-40f4-84c7-2ec1cf926463", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74646ff747", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74646ff747-bfw76", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf5ad9fbc9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:37.713061 containerd[1681]: 2026-01-15 00:29:37.663 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.713651 containerd[1681]: 2026-01-15 00:29:37.663 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf5ad9fbc9c ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.713651 containerd[1681]: 2026-01-15 00:29:37.678 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.715919 containerd[1681]: 2026-01-15 00:29:37.679 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74646ff747--bfw76-eth0", GenerateName:"whisker-74646ff747-", Namespace:"calico-system", SelfLink:"", UID:"3f98e226-1bab-40f4-84c7-2ec1cf926463", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74646ff747", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f", Pod:"whisker-74646ff747-bfw76", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf5ad9fbc9c", MAC:"46:eb:1b:9d:e3:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:37.716071 containerd[1681]: 2026-01-15 00:29:37.697 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" Namespace="calico-system" Pod="whisker-74646ff747-bfw76" WorkloadEndpoint="localhost-k8s-whisker--74646ff747--bfw76-eth0" Jan 15 00:29:37.828320 containerd[1681]: time="2026-01-15T00:29:37.827905052Z" level=info msg="connecting to shim 0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f" address="unix:///run/containerd/s/fcfa6247b959e41d99a9f25ab302cc5d20e1dd3680ee925ffe6f628e8cccf84c" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:37.941932 kubelet[2801]: E0115 00:29:37.941649 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:37.983328 systemd[1]: Started cri-containerd-0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f.scope - libcontainer container 0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f. 
Jan 15 00:29:38.079000 audit: BPF prog-id=173 op=LOAD Jan 15 00:29:38.081000 audit: BPF prog-id=174 op=LOAD Jan 15 00:29:38.081000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.081000 audit: BPF prog-id=174 op=UNLOAD Jan 15 00:29:38.081000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.082000 audit: BPF prog-id=175 op=LOAD Jan 15 00:29:38.082000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.082000 audit: BPF prog-id=176 op=LOAD Jan 15 00:29:38.082000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.082000 audit: BPF prog-id=176 op=UNLOAD Jan 15 00:29:38.082000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.082000 audit: BPF prog-id=175 op=UNLOAD Jan 15 00:29:38.082000 audit[4472]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.082000 audit: BPF prog-id=177 op=LOAD Jan 15 00:29:38.082000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4457 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065623139643963393930303561636162303234336338633933326437 Jan 15 00:29:38.091388 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:38.175000 audit: BPF prog-id=178 op=LOAD Jan 15 00:29:38.175000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdef697520 a2=98 a3=1fffffffffffffff items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.175000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.175000 audit: BPF prog-id=178 op=UNLOAD Jan 15 00:29:38.175000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdef6974f0 a3=0 items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.175000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.175000 audit: BPF prog-id=179 op=LOAD Jan 15 00:29:38.175000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdef697400 a2=94 a3=3 items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.175000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.176000 audit: BPF prog-id=179 op=UNLOAD Jan 15 
00:29:38.176000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdef697400 a2=94 a3=3 items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.176000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.176000 audit: BPF prog-id=180 op=LOAD Jan 15 00:29:38.176000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdef697440 a2=94 a3=7ffdef697620 items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.176000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.176000 audit: BPF prog-id=180 op=UNLOAD Jan 15 00:29:38.176000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdef697440 a2=94 a3=7ffdef697620 items=0 ppid=4373 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.176000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:29:38.189000 audit: BPF prog-id=181 op=LOAD Jan 15 00:29:38.189000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe360cadb0 a2=98 a3=3 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.189000 audit: BPF prog-id=181 op=UNLOAD Jan 15 00:29:38.189000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe360cad80 a3=0 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.189000 audit: BPF prog-id=182 op=LOAD Jan 15 00:29:38.189000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe360caba0 a2=94 a3=54428f items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.189000 audit: BPF prog-id=182 op=UNLOAD Jan 15 00:29:38.189000 
audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe360caba0 a2=94 a3=54428f items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.189000 audit: BPF prog-id=183 op=LOAD Jan 15 00:29:38.189000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe360cabd0 a2=94 a3=2 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.189000 audit: BPF prog-id=183 op=UNLOAD Jan 15 00:29:38.189000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe360cabd0 a2=0 a3=2 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.219120 containerd[1681]: time="2026-01-15T00:29:38.218911380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74646ff747-bfw76,Uid:3f98e226-1bab-40f4-84c7-2ec1cf926463,Namespace:calico-system,Attempt:0,} returns sandbox id \"0eb19d9c99005acab0243c8c932d72bf49b8f005bf808e8d861c4293be2fbd6f\"" Jan 15 00:29:38.228846 containerd[1681]: time="2026-01-15T00:29:38.228701821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:29:38.326838 containerd[1681]: time="2026-01-15T00:29:38.326637489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:38.328272 containerd[1681]: time="2026-01-15T00:29:38.328185227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:38.328272 containerd[1681]: time="2026-01-15T00:29:38.328262221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:29:38.328726 kubelet[2801]: E0115 00:29:38.328639 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:29:38.328926 kubelet[2801]: E0115 00:29:38.328742 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:29:38.347883 kubelet[2801]: E0115 00:29:38.347679 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:61a58bd17c8b4c158bfe37780c202536,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:38.353197 containerd[1681]: time="2026-01-15T00:29:38.353109866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:29:38.353919 kubelet[2801]: I0115 00:29:38.353445 2801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385" path="/var/lib/kubelet/pods/dc018ed4-efb3-4ec1-94f0-dfe4d7ed1385/volumes" Jan 15 00:29:38.415749 containerd[1681]: time="2026-01-15T00:29:38.415604961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:38.418326 containerd[1681]: time="2026-01-15T00:29:38.418171206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:29:38.418326 containerd[1681]: time="2026-01-15T00:29:38.418292238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:38.418860 kubelet[2801]: E0115 00:29:38.418535 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:29:38.418860 kubelet[2801]: E0115 00:29:38.418597 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:29:38.419011 kubelet[2801]: E0115 00:29:38.418733 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:38.420447 kubelet[2801]: E0115 00:29:38.420380 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:29:38.497000 audit: BPF prog-id=184 op=LOAD Jan 15 00:29:38.497000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe360caa90 a2=94 a3=1 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.497000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.497000 audit: BPF prog-id=184 op=UNLOAD Jan 15 00:29:38.497000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe360caa90 a2=94 a3=1 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.497000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.509000 audit: BPF prog-id=185 op=LOAD Jan 15 00:29:38.509000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe360caa80 a2=94 a3=4 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.509000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.509000 audit: BPF prog-id=185 op=UNLOAD Jan 15 00:29:38.509000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe360caa80 a2=0 a3=4 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.509000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.509000 audit: BPF prog-id=186 op=LOAD Jan 15 00:29:38.509000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe360ca8e0 a2=94 a3=5 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.509000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.509000 audit: BPF prog-id=186 op=UNLOAD Jan 15 00:29:38.509000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe360ca8e0 a2=0 a3=5 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.509000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.509000 audit: BPF prog-id=187 op=LOAD Jan 15 00:29:38.509000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe360cab00 a2=94 a3=6 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.509000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.510000 audit: BPF prog-id=187 op=UNLOAD Jan 15 00:29:38.510000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe360cab00 a2=0 a3=6 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.510000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.510000 audit: BPF prog-id=188 op=LOAD Jan 15 00:29:38.510000 audit[4537]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=5 a1=7ffe360ca2b0 a2=94 a3=88 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.510000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.511000 audit: BPF prog-id=189 op=LOAD Jan 15 00:29:38.511000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe360ca130 a2=94 a3=2 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.511000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.512000 audit: BPF prog-id=189 op=UNLOAD Jan 15 00:29:38.512000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe360ca160 a2=0 a3=7ffe360ca260 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.512000 audit: BPF prog-id=188 op=UNLOAD Jan 15 00:29:38.512000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a6e2d10 a2=0 a3=587dc877abb7a583 items=0 ppid=4373 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.512000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:29:38.528000 audit: BPF prog-id=190 op=LOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff891b6b0 a2=98 a3=1999999999999999 items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.528000 audit: BPF prog-id=190 op=UNLOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff891b680 a3=0 items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.528000 audit: BPF prog-id=191 op=LOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff891b590 a2=94 a3=ffff items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.528000 audit: BPF prog-id=191 op=UNLOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff891b590 a2=94 a3=ffff items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.528000 audit: BPF prog-id=192 op=LOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff891b5d0 a2=94 a3=7ffff891b7b0 items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.528000 audit: BPF prog-id=192 op=UNLOAD Jan 15 00:29:38.528000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff891b5d0 a2=94 a3=7ffff891b7b0 items=0 ppid=4373 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:29:38.684621 systemd-networkd[1541]: vxlan.calico: Link UP Jan 15 00:29:38.684684 systemd-networkd[1541]: vxlan.calico: Gained carrier Jan 15 00:29:38.725000 audit: BPF prog-id=193 op=LOAD Jan 15 00:29:38.725000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca3122f0 a2=98 a3=0 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.725000 audit: BPF prog-id=193 op=UNLOAD Jan 15 00:29:38.725000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeca3122c0 a3=0 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=194 op=LOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca312100 a2=94 a3=54428f items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=194 op=UNLOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeca312100 a2=94 a3=54428f items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=195 op=LOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca312130 a2=94 a3=2 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=195 op=UNLOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeca312130 a2=0 a3=2 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=196 op=LOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeca311ee0 a2=94 a3=4 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=196 op=UNLOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeca311ee0 a2=94 a3=4 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=197 op=LOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeca311fe0 a2=94 a3=7ffeca312160 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.726000 audit: BPF prog-id=197 op=UNLOAD Jan 15 00:29:38.726000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeca311fe0 a2=0 a3=7ffeca312160 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.728000 audit: BPF prog-id=198 op=LOAD Jan 15 00:29:38.728000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeca311710 a2=94 a3=2 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.728000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.728000 audit: BPF prog-id=198 op=UNLOAD Jan 15 00:29:38.728000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeca311710 a2=0 a3=2 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.728000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.728000 audit: BPF prog-id=199 op=LOAD Jan 15 00:29:38.728000 audit[4588]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeca311810 a2=94 a3=30 items=0 ppid=4373 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.728000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:29:38.745000 audit: BPF prog-id=200 op=LOAD Jan 15 00:29:38.745000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd42ebe940 a2=98 a3=0 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.745000 audit: BPF prog-id=200 op=UNLOAD Jan 15 00:29:38.745000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd42ebe910 a3=0 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.746000 audit: BPF prog-id=201 op=LOAD Jan 15 00:29:38.746000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd42ebe730 a2=94 a3=54428f items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.746000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.746000 audit: BPF prog-id=201 op=UNLOAD Jan 15 00:29:38.746000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd42ebe730 a2=94 a3=54428f items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.746000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.746000 audit: BPF prog-id=202 op=LOAD Jan 15 00:29:38.746000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd42ebe760 a2=94 a3=2 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.746000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.747000 audit: BPF prog-id=202 op=UNLOAD Jan 15 00:29:38.747000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd42ebe760 a2=0 a3=2 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.747000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.938000 audit: BPF prog-id=203 op=LOAD Jan 15 00:29:38.938000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd42ebe620 a2=94 a3=1 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.938000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.938000 audit: BPF prog-id=203 op=UNLOAD Jan 15 00:29:38.938000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd42ebe620 a2=94 a3=1 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.938000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.950258 kubelet[2801]: E0115 00:29:38.950112 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:29:38.951000 audit: BPF prog-id=204 op=LOAD Jan 15 00:29:38.951000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd42ebe610 a2=94 a3=4 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.951000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.951000 audit: BPF prog-id=204 op=UNLOAD Jan 15 00:29:38.951000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd42ebe610 a2=0 a3=4 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.952000 audit: BPF prog-id=205 op=LOAD Jan 15 00:29:38.952000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd42ebe470 a2=94 a3=5 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.952000 audit: BPF prog-id=205 op=UNLOAD Jan 15 00:29:38.952000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd42ebe470 a2=0 a3=5 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.952000 audit: BPF prog-id=206 op=LOAD Jan 15 00:29:38.952000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd42ebe690 a2=94 a3=6 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.952000 audit: BPF prog-id=206 op=UNLOAD Jan 15 00:29:38.952000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd42ebe690 a2=0 a3=6 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.952000 audit: BPF prog-id=207 op=LOAD Jan 15 00:29:38.952000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd42ebde40 a2=94 a3=88 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.953000 audit: BPF prog-id=208 op=LOAD Jan 15 00:29:38.953000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd42ebdcc0 a2=94 a3=2 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.953000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.953000 audit: BPF prog-id=208 op=UNLOAD Jan 15 00:29:38.953000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd42ebdcf0 a2=0 a3=7ffd42ebddf0 items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.953000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.954000 audit: BPF prog-id=207 op=UNLOAD Jan 15 00:29:38.954000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=14afdd10 a2=0 a3=113904e52db1dc6e items=0 ppid=4373 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.954000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:29:38.971000 audit: BPF prog-id=199 op=UNLOAD Jan 15 00:29:38.971000 audit[4373]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000d5e580 a2=0 a3=0 items=0 ppid=4368 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.971000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 00:29:38.986000 audit[4604]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:38.986000 audit[4604]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff147a5330 a2=0 a3=7fff147a531c items=0 ppid=2911 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:38.993000 audit[4604]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:38.993000 
audit[4604]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff147a5330 a2=0 a3=0 items=0 ppid=2911 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:38.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:39.068000 audit[4624]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:39.068000 audit[4624]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdb11181c0 a2=0 a3=7ffdb11181ac items=0 ppid=4373 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:39.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:39.082000 audit[4629]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:39.082000 audit[4629]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffa60e36a0 a2=0 a3=7fffa60e368c items=0 ppid=4373 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:39.082000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:39.082000 audit[4622]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4622 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:39.082000 audit[4622]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff47204060 a2=0 a3=7fff4720404c items=0 ppid=4373 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:39.082000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:39.091000 audit[4626]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:39.091000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffca3b93c60 a2=0 a3=7ffca3b93c4c items=0 ppid=4373 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:39.091000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:39.697542 systemd-networkd[1541]: calicf5ad9fbc9c: Gained IPv6LL Jan 15 00:29:39.950547 systemd-networkd[1541]: vxlan.calico: Gained 
IPv6LL Jan 15 00:29:39.954176 kubelet[2801]: E0115 00:29:39.953721 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:29:43.529705 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:35182.service - OpenSSH per-connection server daemon (10.0.0.1:35182). Jan 15 00:29:43.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.47:22-10.0.0.1:35182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:43.533478 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 15 00:29:43.533575 kernel: audit: type=1130 audit(1768436983.528:637): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.47:22-10.0.0.1:35182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:43.661000 audit[4641]: USER_ACCT pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.665700 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 35182 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:29:43.666930 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:29:43.663000 audit[4641]: CRED_ACQ pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.683375 systemd-logind[1646]: New session 8 of user core. 
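The audit records above carry each audited process's full command line in the PROCTITLE field, hex-encoded with NUL bytes separating the arguments (ausearch -i applies the same interpretation). As a reading aid, a minimal decoder is sketched below; the sample values are copied verbatim from the records above and decode to the bpftool commands issued under calico-node (ppid 4373), the iptables-nft-restore runs, and the sshd-session process for user core.

```python
# Minimal decoder for the hex-encoded PROCTITLE field in the audit records
# above: the value is the process command line with argv elements separated
# by NUL bytes, so replacing the NULs with spaces recovers the invocation.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

# Sample values copied from the records above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# -> bpftool map list --json
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
    "002D2D77616974003130002D2D776169742D696E74657276616C003530303030"))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```

The longer values decode the same way, e.g. the bpftool map create of /sys/fs/bpf/calico/calico_failsafe_ports_v1 (type hash, key 4, value 1, 65535 entries; the record truncates the trailing name) and the bpftool prog load of /usr/lib/calico/bpf/filter.o pinned at /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A as type xdp.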
Jan 15 00:29:43.691104 kernel: audit: type=1101 audit(1768436983.661:638): pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.691197 kernel: audit: type=1103 audit(1768436983.663:639): pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.691226 kernel: audit: type=1006 audit(1768436983.663:640): pid=4641 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 15 00:29:43.663000 audit[4641]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd999df370 a2=3 a3=0 items=0 ppid=1 pid=4641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:43.708949 kernel: audit: type=1300 audit(1768436983.663:640): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd999df370 a2=3 a3=0 items=0 ppid=1 pid=4641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:43.709046 kernel: audit: type=1327 audit(1768436983.663:640): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:43.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:43.710228 systemd[1]: Started session-8.scope - Session 8 of User core. 
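Several of these userspace audit events are also mirrored into the kernel ring buffer as bare type=<number> lines (the kauditd messages above), so the same event shows up twice under different labels. The mapping below is read off the paired entries where both forms appear; the standard names are filled in for type=1006 (LOGIN) and for SERVICE_STOP (1131), which each occur only in one form in this window.

```python
# Numeric audit record types seen in the mirrored "kernel: audit: type=NNNN"
# lines above, mapped to the named userspace records they duplicate in this
# SSH session.
AUDIT_TYPES_SEEN = {
    1006: "LOGIN",          # auid/ses assignment when session 8 is created
    1101: "USER_ACCT",      # PAM accounting for user "core"
    1103: "CRED_ACQ",       # PAM credential acquisition
    1104: "CRED_DISP",      # PAM credential disposal at logout
    1105: "USER_START",     # PAM session open (session-8.scope)
    1106: "USER_END",       # PAM session close
    1130: "SERVICE_START",  # systemd starting the sshd@7-... unit
    1131: "SERVICE_STOP",   # systemd stopping it again (standard value)
    1300: "SYSCALL",        # raw syscall record
    1327: "PROCTITLE",      # hex-encoded command line (see decoder above)
}
```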
Jan 15 00:29:43.712000 audit[4641]: USER_START pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.715000 audit[4644]: CRED_ACQ pid=4644 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.741611 kernel: audit: type=1105 audit(1768436983.712:641): pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.741756 kernel: audit: type=1103 audit(1768436983.715:642): pid=4644 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.975235 sshd[4644]: Connection closed by 10.0.0.1 port 35182 Jan 15 00:29:43.976024 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:43.977000 audit[4641]: USER_END pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:44.001001 kernel: audit: type=1106 audit(1768436983.977:643): pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.983254 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:35182.service: Deactivated successfully. Jan 15 00:29:43.986968 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 00:29:43.990974 systemd-logind[1646]: Session 8 logged out. Waiting for processes to exit. Jan 15 00:29:43.993312 systemd-logind[1646]: Removed session 8. Jan 15 00:29:43.978000 audit[4641]: CRED_DISP pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:43.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.47:22-10.0.0.1:35182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:29:44.020957 kernel: audit: type=1104 audit(1768436983.978:644): pid=4641 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:44.336184 containerd[1681]: time="2026-01-15T00:29:44.335989178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:44.701761 systemd-networkd[1541]: calic6eec279073: Link UP Jan 15 00:29:44.702891 systemd-networkd[1541]: calic6eec279073: Gained carrier Jan 15 00:29:44.729863 containerd[1681]: 2026-01-15 00:29:44.474 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0 calico-kube-controllers-b75c74db4- calico-system aa219502-b65d-488f-aa83-975822920d6e 841 0 2026-01-15 00:29:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b75c74db4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b75c74db4-kxw6j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic6eec279073 [] [] }} ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-" Jan 15 00:29:44.729863 containerd[1681]: 2026-01-15 00:29:44.475 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.729863 containerd[1681]: 2026-01-15 00:29:44.560 [INFO][4686] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" HandleID="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Workload="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.561 [INFO][4686] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" HandleID="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Workload="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000522f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b75c74db4-kxw6j", "timestamp":"2026-01-15 00:29:44.560686436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.561 [INFO][4686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
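The cni-plugin messages above key the pod on its Calico WorkloadEndpoint name, localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0. Judging from the examples in this log, the name is assembled from the node, the orchestrator, the pod name with every '-' doubled (so the field separators stay unambiguous), and the interface; a small sketch of that pattern, inferred from these log lines rather than taken from Calico's source:

```python
# WorkloadEndpoint naming pattern as inferred from the cni-plugin log lines:
#   <node>-k8s-<pod name with "-" doubled>-<interface>
def workload_endpoint_name(node: str, pod: str, iface: str = "eth0") -> str:
    return f"{node}-k8s-{pod.replace('-', '--')}-{iface}"

# Reproduces the endpoint names that appear in this log:
assert workload_endpoint_name(
    "localhost", "calico-kube-controllers-b75c74db4-kxw6j"
) == "localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0"
assert workload_endpoint_name(
    "localhost", "coredns-668d6bf9bc-qsbfm"   # the coredns endpoint further down
) == "localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0"
```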
Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.561 [INFO][4686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.561 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.573 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" host="localhost" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.585 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.597 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.603 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.607 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:44.732018 containerd[1681]: 2026-01-15 00:29:44.607 [INFO][4686] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" host="localhost" Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.610 [INFO][4686] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958 Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.630 [INFO][4686] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" host="localhost" Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.651 [INFO][4686] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" host="localhost" Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.652 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" host="localhost" Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.653 [INFO][4686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
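The IPAM trace above shows the plugin taking the host-wide lock, confirming this node's affinity for block 192.168.88.128/26, and claiming 192.168.88.130 for calico-kube-controllers-b75c74db4-kxw6j (written onto the endpoint below as a /32). The block arithmetic in these messages can be checked with the standard library; a brief sketch:

```python
# Check of the IPAM block arithmetic recorded above: the node's affine block
# is 192.168.88.128/26 and the address claimed for calico-kube-controllers
# is 192.168.88.130.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_interface("192.168.88.130/26")

assert claimed.network == block    # the claim is reported against the affine block
assert claimed.ip in block         # and the address itself lies inside it
print(block.num_addresses)         # 64 addresses per /26 block
```

The same sequence repeats further down for the coredns pod, which is handed 192.168.88.131 from the same block.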
Jan 15 00:29:44.732646 containerd[1681]: 2026-01-15 00:29:44.653 [INFO][4686] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" HandleID="k8s-pod-network.c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Workload="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.733584 containerd[1681]: 2026-01-15 00:29:44.694 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0", GenerateName:"calico-kube-controllers-b75c74db4-", Namespace:"calico-system", SelfLink:"", UID:"aa219502-b65d-488f-aa83-975822920d6e", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b75c74db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b75c74db4-kxw6j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6eec279073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:44.733762 containerd[1681]: 2026-01-15 00:29:44.694 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.733762 containerd[1681]: 2026-01-15 00:29:44.694 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6eec279073 ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.733762 containerd[1681]: 2026-01-15 00:29:44.704 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.734092 containerd[1681]: 2026-01-15 00:29:44.705 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0", GenerateName:"calico-kube-controllers-b75c74db4-", Namespace:"calico-system", SelfLink:"", UID:"aa219502-b65d-488f-aa83-975822920d6e", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b75c74db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958", Pod:"calico-kube-controllers-b75c74db4-kxw6j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6eec279073", MAC:"8a:45:2c:25:f5:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:44.734256 containerd[1681]: 2026-01-15 00:29:44.724 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" Namespace="calico-system" Pod="calico-kube-controllers-b75c74db4-kxw6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b75c74db4--kxw6j-eth0" Jan 15 00:29:44.760000 audit[4703]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:44.760000 audit[4703]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffe58766cf0 a2=0 a3=7ffe58766cdc items=0 ppid=4373 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.760000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:44.778919 containerd[1681]: time="2026-01-15T00:29:44.778761899Z" level=info msg="connecting to shim c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958" address="unix:///run/containerd/s/9a58acdf2f2a310dad4c8e417a578c9841e07b1d7b32d0fb9145750fc1ccf3d5" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:44.841161 systemd[1]: Started cri-containerd-c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958.scope - libcontainer container c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958. 
Jan 15 00:29:44.867000 audit: BPF prog-id=209 op=LOAD Jan 15 00:29:44.868000 audit: BPF prog-id=210 op=LOAD Jan 15 00:29:44.868000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.868000 audit: BPF prog-id=210 op=UNLOAD Jan 15 00:29:44.868000 audit[4723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.869000 audit: BPF prog-id=211 op=LOAD Jan 15 00:29:44.869000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.869000 audit: BPF prog-id=212 op=LOAD Jan 15 00:29:44.869000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.869000 audit: BPF prog-id=212 op=UNLOAD Jan 15 00:29:44.869000 audit[4723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.870000 audit: BPF prog-id=211 op=UNLOAD Jan 15 00:29:44.870000 audit[4723]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.870000 audit: BPF prog-id=213 op=LOAD Jan 15 00:29:44.870000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4712 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:44.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383933333032316637643836633065383133313334613938326437 Jan 15 00:29:44.876240 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:44.949697 containerd[1681]: time="2026-01-15T00:29:44.949441697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b75c74db4-kxw6j,Uid:aa219502-b65d-488f-aa83-975822920d6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c58933021f7d86c0e813134a982d7ac814a8d4d71100ce4baef5056685e81958\"" Jan 15 00:29:44.954104 containerd[1681]: time="2026-01-15T00:29:44.953964308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:29:45.036559 containerd[1681]: time="2026-01-15T00:29:45.036298596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:45.038272 containerd[1681]: time="2026-01-15T00:29:45.038135840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:29:45.038272 containerd[1681]: time="2026-01-15T00:29:45.038210780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:45.038896 kubelet[2801]: E0115 00:29:45.038710 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:29:45.040299 kubelet[2801]: E0115 00:29:45.038914 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:29:45.040299 kubelet[2801]: E0115 00:29:45.039156 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl7dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:45.041205 kubelet[2801]: E0115 00:29:45.041119 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:29:45.335679 kubelet[2801]: E0115 00:29:45.335412 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 15 00:29:45.336673 containerd[1681]: time="2026-01-15T00:29:45.336143592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:45.337168 containerd[1681]: time="2026-01-15T00:29:45.336845994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:45.518895 systemd-networkd[1541]: cali0fd82f6d6ae: Link UP Jan 15 00:29:45.520578 systemd-networkd[1541]: cali0fd82f6d6ae: Gained carrier Jan 15 00:29:45.542761 containerd[1681]: 2026-01-15 00:29:45.410 [INFO][4749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0 coredns-668d6bf9bc- kube-system 93a98979-ee0c-4100-b4c9-9eb82d024b13 839 0 2026-01-15 00:28:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qsbfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fd82f6d6ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-" Jan 15 00:29:45.542761 containerd[1681]: 2026-01-15 00:29:45.410 [INFO][4749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.542761 containerd[1681]: 2026-01-15 00:29:45.467 [INFO][4777] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" HandleID="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Workload="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.468 [INFO][4777] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" HandleID="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Workload="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4280), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qsbfm", "timestamp":"2026-01-15 00:29:45.46776493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.468 [INFO][4777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.468 [INFO][4777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.468 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.475 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" host="localhost" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.480 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.486 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.489 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.492 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:45.543736 containerd[1681]: 2026-01-15 00:29:45.492 [INFO][4777] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" host="localhost" Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.494 [INFO][4777] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.499 [INFO][4777] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" host="localhost" Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.505 [INFO][4777] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" host="localhost" Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.505 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" host="localhost" Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.506 [INFO][4777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:45.544201 containerd[1681]: 2026-01-15 00:29:45.507 [INFO][4777] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" HandleID="k8s-pod-network.418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Workload="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.544481 containerd[1681]: 2026-01-15 00:29:45.513 [INFO][4749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93a98979-ee0c-4100-b4c9-9eb82d024b13", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qsbfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd82f6d6ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:45.544625 containerd[1681]: 2026-01-15 00:29:45.514 [INFO][4749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.544625 containerd[1681]: 2026-01-15 00:29:45.514 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fd82f6d6ae ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.544625 containerd[1681]: 2026-01-15 00:29:45.521 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.544706 
containerd[1681]: 2026-01-15 00:29:45.522 [INFO][4749] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93a98979-ee0c-4100-b4c9-9eb82d024b13", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a", Pod:"coredns-668d6bf9bc-qsbfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd82f6d6ae", MAC:"fe:a1:20:46:39:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:45.544706 containerd[1681]: 2026-01-15 00:29:45.538 [INFO][4749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-qsbfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qsbfm-eth0" Jan 15 00:29:45.564000 audit[4806]: NETFILTER_CFG table=filter:126 family=2 entries=52 op=nft_register_chain pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:45.564000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=26592 a0=3 a1=7ffe7967d390 a2=0 a3=7ffe7967d37c items=0 ppid=4373 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.564000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:45.612471 containerd[1681]: time="2026-01-15T00:29:45.612116300Z" level=info msg="connecting to shim 418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a" address="unix:///run/containerd/s/af27fdc868ac19538c46071fc657e313fbdbaa70d59ac6a589a3a2517228ba85" namespace=k8s.io 
protocol=ttrpc version=3 Jan 15 00:29:45.639064 systemd-networkd[1541]: cali4e2b155e3c1: Link UP Jan 15 00:29:45.641000 systemd-networkd[1541]: cali4e2b155e3c1: Gained carrier Jan 15 00:29:45.679343 systemd[1]: Started cri-containerd-418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a.scope - libcontainer container 418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a. Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.419 [INFO][4757] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0 calico-apiserver-77c9fcdc8c- calico-apiserver 4b2ac921-bc9c-4449-a6c1-96910dec381e 838 0 2026-01-15 00:29:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c9fcdc8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c9fcdc8c-f8t27 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e2b155e3c1 [] [] }} ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.419 [INFO][4757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.469 [INFO][4783] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" HandleID="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.470 [INFO][4783] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" HandleID="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c9fcdc8c-f8t27", "timestamp":"2026-01-15 00:29:45.469952578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.470 [INFO][4783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.506 [INFO][4783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.506 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.578 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.589 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.596 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.599 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.603 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.603 [INFO][4783] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.607 [INFO][4783] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765 Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.616 [INFO][4783] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.627 [INFO][4783] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.628 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" host="localhost" Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.628 [INFO][4783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:45.682448 containerd[1681]: 2026-01-15 00:29:45.628 [INFO][4783] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" HandleID="k8s-pod-network.df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.632 [INFO][4757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0", GenerateName:"calico-apiserver-77c9fcdc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2ac921-bc9c-4449-a6c1-96910dec381e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c9fcdc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c9fcdc8c-f8t27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e2b155e3c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.632 [INFO][4757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.632 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e2b155e3c1 ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.642 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.643 [INFO][4757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0", GenerateName:"calico-apiserver-77c9fcdc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2ac921-bc9c-4449-a6c1-96910dec381e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c9fcdc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765", Pod:"calico-apiserver-77c9fcdc8c-f8t27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e2b155e3c1", MAC:"0e:a5:e3:ed:d7:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:45.683430 containerd[1681]: 2026-01-15 00:29:45.663 [INFO][4757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-f8t27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--f8t27-eth0" Jan 15 00:29:45.699000 audit[4853]: NETFILTER_CFG table=filter:127 family=2 entries=54 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:45.699000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=29380 a0=3 a1=7ffea04b5bc0 a2=0 a3=7ffea04b5bac items=0 ppid=4373 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.699000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:45.711000 audit: BPF prog-id=214 op=LOAD Jan 15 00:29:45.714000 audit: BPF prog-id=215 op=LOAD Jan 15 00:29:45.714000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.714000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.714000 audit: BPF prog-id=215 op=UNLOAD Jan 15 00:29:45.714000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.715000 audit: BPF prog-id=216 op=LOAD Jan 15 00:29:45.715000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.715000 audit: BPF prog-id=217 op=LOAD Jan 15 00:29:45.715000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.716000 audit: BPF prog-id=217 op=UNLOAD Jan 15 00:29:45.716000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.716000 audit: BPF prog-id=216 op=UNLOAD Jan 15 00:29:45.716000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.716000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.716000 audit: BPF prog-id=218 op=LOAD Jan 15 00:29:45.716000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4816 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431386162343335303138333739306361366635326265653333303031 Jan 15 00:29:45.720284 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:45.752280 containerd[1681]: time="2026-01-15T00:29:45.751943729Z" level=info msg="connecting to shim df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765" address="unix:///run/containerd/s/e581aa0f7b6644db9896ec82f219348ec1b36fc59afaad3b52655bb59cce01fe" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:45.806766 containerd[1681]: time="2026-01-15T00:29:45.806664340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qsbfm,Uid:93a98979-ee0c-4100-b4c9-9eb82d024b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a\"" Jan 15 00:29:45.809006 kubelet[2801]: E0115 00:29:45.808182 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:45.810237 containerd[1681]: time="2026-01-15T00:29:45.810140660Z" level=info msg="CreateContainer within sandbox \"418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:29:45.828691 systemd[1]: Started cri-containerd-df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765.scope - libcontainer container df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765. 
Jan 15 00:29:45.835903 containerd[1681]: time="2026-01-15T00:29:45.834936882Z" level=info msg="Container 86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:45.846180 containerd[1681]: time="2026-01-15T00:29:45.846041715Z" level=info msg="CreateContainer within sandbox \"418ab4350183790ca6f52bee330015c66874681e09a469ce34cfa2db6c826c7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597\"" Jan 15 00:29:45.852051 containerd[1681]: time="2026-01-15T00:29:45.852019363Z" level=info msg="StartContainer for \"86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597\"" Jan 15 00:29:45.853576 containerd[1681]: time="2026-01-15T00:29:45.853212680Z" level=info msg="connecting to shim 86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597" address="unix:///run/containerd/s/af27fdc868ac19538c46071fc657e313fbdbaa70d59ac6a589a3a2517228ba85" protocol=ttrpc version=3 Jan 15 00:29:45.871000 audit: BPF prog-id=219 op=LOAD Jan 15 00:29:45.872000 audit: BPF prog-id=220 op=LOAD Jan 15 00:29:45.872000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.872000 audit: BPF prog-id=220 op=UNLOAD Jan 15 00:29:45.872000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.873000 audit: BPF prog-id=221 op=LOAD Jan 15 00:29:45.873000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.874000 audit: BPF prog-id=222 op=LOAD Jan 15 00:29:45.874000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.874000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.874000 audit: BPF prog-id=222 op=UNLOAD Jan 15 00:29:45.874000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.874000 audit: BPF prog-id=221 op=UNLOAD Jan 15 00:29:45.874000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.875000 audit: BPF prog-id=223 op=LOAD Jan 15 00:29:45.875000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4867 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466363334633939366531303663396266346234396333633434333931 Jan 15 00:29:45.880038 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:45.911365 systemd[1]: Started cri-containerd-86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597.scope - libcontainer container 86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597. 
Jan 15 00:29:45.950308 containerd[1681]: time="2026-01-15T00:29:45.950235862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-f8t27,Uid:4b2ac921-bc9c-4449-a6c1-96910dec381e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df634c996e106c9bf4b49c3c44391e3c37250454c3c4e66146a880582984b765\"" Jan 15 00:29:45.954733 containerd[1681]: time="2026-01-15T00:29:45.954657816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:29:45.954000 audit: BPF prog-id=224 op=LOAD Jan 15 00:29:45.956000 audit: BPF prog-id=225 op=LOAD Jan 15 00:29:45.956000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.957000 audit: BPF prog-id=225 op=UNLOAD Jan 15 00:29:45.957000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.957000 audit: BPF prog-id=226 op=LOAD Jan 15 00:29:45.957000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.958000 audit: BPF prog-id=227 op=LOAD Jan 15 00:29:45.958000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.958000 audit: BPF prog-id=227 op=UNLOAD Jan 15 00:29:45.958000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.958000 audit: BPF prog-id=226 op=UNLOAD Jan 15 00:29:45.958000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:45.959000 audit: BPF prog-id=228 op=LOAD Jan 15 00:29:45.959000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4816 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:45.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836663835333434386234393232353263353266666530356564653234 Jan 15 00:29:46.004457 containerd[1681]: time="2026-01-15T00:29:46.004257265Z" level=info msg="StartContainer for \"86f853448b492252c52ffe05ede241146b31fae45b2130c7b6be9c4be4ba0597\" returns successfully" Jan 15 00:29:46.014958 kubelet[2801]: E0115 00:29:46.014710 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:29:46.025153 containerd[1681]: time="2026-01-15T00:29:46.025040873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:46.029030 containerd[1681]: time="2026-01-15T00:29:46.028929561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:29:46.029339 containerd[1681]: time="2026-01-15T00:29:46.029040789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:46.030178 kubelet[2801]: E0115 00:29:46.030051 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:46.030248 kubelet[2801]: E0115 00:29:46.030184 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:46.030445 kubelet[2801]: E0115 00:29:46.030289 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-976wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:46.031680 kubelet[2801]: E0115 00:29:46.031652 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:29:46.340808 containerd[1681]: time="2026-01-15T00:29:46.340673231Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:46.341381 containerd[1681]: time="2026-01-15T00:29:46.341216741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:29:46.548953 systemd-networkd[1541]: cali3acce47dc0c: Link UP Jan 15 00:29:46.550967 systemd-networkd[1541]: cali3acce47dc0c: Gained carrier Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.414 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0 calico-apiserver-77c9fcdc8c- calico-apiserver ab27570c-5eb0-4b1f-9c2f-ecefc027b548 844 0 2026-01-15 00:29:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c9fcdc8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c9fcdc8c-wjfr8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3acce47dc0c [] [] }} ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.414 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.487 [INFO][4974] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" HandleID="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.487 [INFO][4974] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" HandleID="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c9fcdc8c-wjfr8", "timestamp":"2026-01-15 00:29:46.487168831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.487 [INFO][4974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.487 [INFO][4974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.487 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.496 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.503 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.515 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.518 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.522 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.522 [INFO][4974] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.524 [INFO][4974] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627 Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.530 [INFO][4974] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.537 [INFO][4974] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.537 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" host="localhost" Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.538 [INFO][4974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:46.568195 containerd[1681]: 2026-01-15 00:29:46.538 [INFO][4974] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" HandleID="k8s-pod-network.b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Workload="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.541 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0", GenerateName:"calico-apiserver-77c9fcdc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab27570c-5eb0-4b1f-9c2f-ecefc027b548", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c9fcdc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c9fcdc8c-wjfr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3acce47dc0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.542 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.542 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3acce47dc0c ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.552 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.552 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0", GenerateName:"calico-apiserver-77c9fcdc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab27570c-5eb0-4b1f-9c2f-ecefc027b548", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c9fcdc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627", Pod:"calico-apiserver-77c9fcdc8c-wjfr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3acce47dc0c", MAC:"22:d4:ec:e4:4b:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:46.570927 containerd[1681]: 2026-01-15 00:29:46.563 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" Namespace="calico-apiserver" Pod="calico-apiserver-77c9fcdc8c-wjfr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c9fcdc8c--wjfr8-eth0" Jan 15 00:29:46.585194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718150545.mount: Deactivated successfully. 
Jan 15 00:29:46.593000 audit[4999]: NETFILTER_CFG table=filter:128 family=2 entries=45 op=nft_register_chain pid=4999 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:46.593000 audit[4999]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7ffc2e398340 a2=0 a3=7ffc2e39832c items=0 ppid=4373 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.593000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:46.629178 containerd[1681]: time="2026-01-15T00:29:46.627970211Z" level=info msg="connecting to shim b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627" address="unix:///run/containerd/s/782da4f58fcde05373574ca53177879ced6257612e0c5a3ca2c7113794831df7" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:46.706948 systemd-networkd[1541]: cali96702b3c66a: Link UP Jan 15 00:29:46.708066 systemd[1]: Started cri-containerd-b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627.scope - libcontainer container b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627. Jan 15 00:29:46.711355 systemd-networkd[1541]: cali96702b3c66a: Gained carrier Jan 15 00:29:46.734022 systemd-networkd[1541]: calic6eec279073: Gained IPv6LL Jan 15 00:29:46.734000 audit: BPF prog-id=229 op=LOAD Jan 15 00:29:46.736000 audit: BPF prog-id=230 op=LOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=230 op=UNLOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=231 op=LOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=232 op=LOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=232 op=UNLOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=231 op=UNLOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.736000 audit: BPF prog-id=233 op=LOAD Jan 15 00:29:46.736000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5009 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336235663064336135376538393833643539313362313865616136 Jan 15 00:29:46.740907 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.426 [INFO][4946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0 calico-apiserver-6b544d645f- calico-apiserver 6241b949-d82f-4e04-b2b8-fdb1cda43b39 843 0 2026-01-15 00:29:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:6b544d645f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b544d645f-gznqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96702b3c66a [] [] }} ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.428 [INFO][4946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.489 [INFO][4976] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" HandleID="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Workload="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.489 [INFO][4976] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" HandleID="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Workload="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b544d645f-gznqk", "timestamp":"2026-01-15 00:29:46.489110687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.489 [INFO][4976] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.538 [INFO][4976] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.538 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.603 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.616 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.626 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.634 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.640 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.640 [INFO][4976] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.648 [INFO][4976] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.658 [INFO][4976] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.683 [INFO][4976] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.684 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" host="localhost" Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.685 [INFO][4976] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:46.754581 containerd[1681]: 2026-01-15 00:29:46.685 [INFO][4976] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" HandleID="k8s-pod-network.552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Workload="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.696 [INFO][4946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0", GenerateName:"calico-apiserver-6b544d645f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6241b949-d82f-4e04-b2b8-fdb1cda43b39", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b544d645f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b544d645f-gznqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96702b3c66a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.696 [INFO][4946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.696 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96702b3c66a ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.721 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.722 [INFO][4946] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0", GenerateName:"calico-apiserver-6b544d645f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6241b949-d82f-4e04-b2b8-fdb1cda43b39", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b544d645f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f", Pod:"calico-apiserver-6b544d645f-gznqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96702b3c66a", MAC:"16:9d:57:51:3a:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:46.756223 containerd[1681]: 2026-01-15 00:29:46.745 [INFO][4946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" Namespace="calico-apiserver" Pod="calico-apiserver-6b544d645f-gznqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b544d645f--gznqk-eth0" Jan 15 00:29:46.814031 containerd[1681]: time="2026-01-15T00:29:46.813757994Z" level=info msg="connecting to shim 552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f" address="unix:///run/containerd/s/b4015e43ef244d444805f6e516ed55c509170284a673390f1f06c2063e0b6ae3" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:46.817000 audit[5050]: NETFILTER_CFG table=filter:129 family=2 entries=49 op=nft_register_chain pid=5050 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:46.817000 audit[5050]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffc566d4270 a2=0 a3=7ffc566d425c items=0 ppid=4373 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.817000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:46.864331 systemd-networkd[1541]: cali0fd82f6d6ae: Gained IPv6LL Jan 15 00:29:46.896349 containerd[1681]: time="2026-01-15T00:29:46.896195270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c9fcdc8c-wjfr8,Uid:ab27570c-5eb0-4b1f-9c2f-ecefc027b548,Namespace:calico-apiserver,Attempt:0,} returns 
sandbox id \"b23b5f0d3a57e8983d5913b18eaa6ad8784aaa4f94ae60396aea0d80e79d6627\"" Jan 15 00:29:46.899125 systemd[1]: Started cri-containerd-552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f.scope - libcontainer container 552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f. Jan 15 00:29:46.911136 containerd[1681]: time="2026-01-15T00:29:46.911057504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:29:46.951000 audit: BPF prog-id=234 op=LOAD Jan 15 00:29:46.952000 audit: BPF prog-id=235 op=LOAD Jan 15 00:29:46.952000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.952000 audit: BPF prog-id=235 op=UNLOAD Jan 15 00:29:46.952000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.953000 audit: BPF prog-id=236 op=LOAD Jan 15 00:29:46.953000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.953000 audit: BPF prog-id=237 op=LOAD Jan 15 00:29:46.953000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.953000 audit: BPF prog-id=237 op=UNLOAD Jan 15 00:29:46.953000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.953000 audit: BPF prog-id=236 op=UNLOAD Jan 15 00:29:46.953000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.953000 audit: BPF prog-id=238 op=LOAD Jan 15 00:29:46.953000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5056 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:46.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535323034313736306561303034336261333236663961643033653563 Jan 15 00:29:46.957298 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:46.979719 containerd[1681]: time="2026-01-15T00:29:46.979014958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:46.981758 containerd[1681]: time="2026-01-15T00:29:46.981472946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:29:46.981758 containerd[1681]: time="2026-01-15T00:29:46.981586823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:46.982000 kubelet[2801]: E0115 00:29:46.981753 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:46.982000 kubelet[2801]: E0115 00:29:46.981870 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:46.983914 kubelet[2801]: E0115 00:29:46.982014 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54hkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:46.983914 kubelet[2801]: E0115 00:29:46.983353 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:29:47.026598 kubelet[2801]: E0115 00:29:47.026434 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:47.035228 kubelet[2801]: E0115 00:29:47.035147 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 
15 00:29:47.035720 kubelet[2801]: E0115 00:29:47.035281 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:29:47.053292 kubelet[2801]: I0115 00:29:47.053175 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qsbfm" podStartSLOduration=52.052892434 podStartE2EDuration="52.052892434s" podCreationTimestamp="2026-01-15 00:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:29:47.050352854 +0000 UTC m=+58.835631975" watchObservedRunningTime="2026-01-15 00:29:47.052892434 +0000 UTC m=+58.838171536" Jan 15 00:29:47.054599 containerd[1681]: time="2026-01-15T00:29:47.053072150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b544d645f-gznqk,Uid:6241b949-d82f-4e04-b2b8-fdb1cda43b39,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"552041760ea0043ba326f9ad03e5c697ac6a6d844f9fb8c2b96e7061a250748f\"" Jan 15 00:29:47.068831 containerd[1681]: time="2026-01-15T00:29:47.068610613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:29:47.126000 audit[5105]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:47.126000 audit[5105]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd842ea10 a2=0 a3=7fffd842e9fc items=0 ppid=2911 pid=5105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:47.137000 audit[5105]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:47.137000 audit[5105]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd842ea10 a2=0 a3=0 items=0 ppid=2911 pid=5105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:47.165114 containerd[1681]: time="2026-01-15T00:29:47.165047951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:47.166920 containerd[1681]: time="2026-01-15T00:29:47.166736720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:29:47.167296 containerd[1681]: 
time="2026-01-15T00:29:47.167039004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:47.168073 kubelet[2801]: E0115 00:29:47.167931 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:47.168073 kubelet[2801]: E0115 00:29:47.168050 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:29:47.168456 kubelet[2801]: E0115 00:29:47.168266 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpxxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:47.170841 kubelet[2801]: E0115 00:29:47.170067 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:29:47.182000 audit[5107]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:47.182000 audit[5107]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf2ef3d20 a2=0 a3=7ffcf2ef3d0c items=0 ppid=2911 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:47.197000 audit[5107]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:47.197000 audit[5107]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf2ef3d20 a2=0 a3=0 items=0 ppid=2911 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:47.335303 kubelet[2801]: E0115 00:29:47.335146 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:47.336075 containerd[1681]: time="2026-01-15T00:29:47.335882409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,}" Jan 15 00:29:47.438970 systemd-networkd[1541]: cali4e2b155e3c1: Gained IPv6LL Jan 15 00:29:47.550688 systemd-networkd[1541]: calie3db32765c9: Link UP Jan 15 00:29:47.556423 systemd-networkd[1541]: calie3db32765c9: Gained carrier Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.409 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0 coredns-668d6bf9bc- kube-system a4865d61-f273-4528-ab08-e1c0e1f3c4fa 832 0 2026-01-15 00:28:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d7z9q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie3db32765c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.409 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.476 [INFO][5122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" HandleID="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Workload="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.478 [INFO][5122] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" HandleID="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Workload="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d7z9q", "timestamp":"2026-01-15 00:29:47.476684379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.479 [INFO][5122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.480 [INFO][5122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.480 [INFO][5122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.495 [INFO][5122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.504 [INFO][5122] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.512 [INFO][5122] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.515 [INFO][5122] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.518 [INFO][5122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.518 [INFO][5122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.520 [INFO][5122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.527 [INFO][5122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.536 [INFO][5122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.537 [INFO][5122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" host="localhost" Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.537 [INFO][5122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 00:29:47.592223 containerd[1681]: 2026-01-15 00:29:47.537 [INFO][5122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" HandleID="k8s-pod-network.87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Workload="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.543 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4865d61-f273-4528-ab08-e1c0e1f3c4fa", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d7z9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3db32765c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.543 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.543 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3db32765c9 
ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.558 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.561 [INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4865d61-f273-4528-ab08-e1c0e1f3c4fa", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d", Pod:"coredns-668d6bf9bc-d7z9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3db32765c9", MAC:"16:40:e4:dc:10:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:47.594679 containerd[1681]: 2026-01-15 00:29:47.577 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7z9q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7z9q-eth0" Jan 15 00:29:47.612000 audit[5140]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:47.612000 audit[5140]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffccd6fc960 a2=0 a3=7ffccd6fc94c items=0 ppid=4373 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.612000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:47.639487 containerd[1681]: time="2026-01-15T00:29:47.639153409Z" level=info msg="connecting to shim 87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d" address="unix:///run/containerd/s/8b2af159a0248af01a7211d9c6ed62cecd815ce59a07adcde4bdc7bc7f1ed6a8" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:47.705278 systemd[1]: Started cri-containerd-87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d.scope - libcontainer container 87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d. Jan 15 00:29:47.730000 audit: BPF prog-id=239 op=LOAD Jan 15 00:29:47.731000 audit: BPF prog-id=240 op=LOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=240 op=UNLOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=241 op=LOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=242 op=LOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=242 op=UNLOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=241 op=UNLOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.731000 audit: BPF prog-id=243 op=LOAD Jan 15 00:29:47.731000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5150 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363636393934663234396665666535306532346566303861636436 Jan 15 00:29:47.736691 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:47.825395 containerd[1681]: time="2026-01-15T00:29:47.825160290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7z9q,Uid:a4865d61-f273-4528-ab08-e1c0e1f3c4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d\"" Jan 15 00:29:47.827417 kubelet[2801]: E0115 00:29:47.827315 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:47.831824 containerd[1681]: time="2026-01-15T00:29:47.831663925Z" level=info msg="CreateContainer within sandbox \"87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:29:47.853038 containerd[1681]: time="2026-01-15T00:29:47.852897509Z" level=info msg="Container 16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:29:47.863043 containerd[1681]: time="2026-01-15T00:29:47.862754902Z" 
level=info msg="CreateContainer within sandbox \"87666994f249fefe50e24ef08acd6b3508235199847dd79b600f1f0d8a44785d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a\"" Jan 15 00:29:47.864238 containerd[1681]: time="2026-01-15T00:29:47.864173033Z" level=info msg="StartContainer for \"16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a\"" Jan 15 00:29:47.865411 containerd[1681]: time="2026-01-15T00:29:47.865350121Z" level=info msg="connecting to shim 16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a" address="unix:///run/containerd/s/8b2af159a0248af01a7211d9c6ed62cecd815ce59a07adcde4bdc7bc7f1ed6a8" protocol=ttrpc version=3 Jan 15 00:29:47.887031 systemd-networkd[1541]: cali3acce47dc0c: Gained IPv6LL Jan 15 00:29:47.912288 systemd[1]: Started cri-containerd-16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a.scope - libcontainer container 16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a. Jan 15 00:29:47.948000 audit: BPF prog-id=244 op=LOAD Jan 15 00:29:47.949000 audit: BPF prog-id=245 op=LOAD Jan 15 00:29:47.949000 audit[5188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=245 op=UNLOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=246 op=LOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=247 op=LOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=247 op=UNLOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=246 op=UNLOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.950000 audit: BPF prog-id=248 op=LOAD Jan 15 00:29:47.950000 audit[5188]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5150 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:47.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136653932373461393032666232653033656366633235373661666335 Jan 15 00:29:47.988720 containerd[1681]: time="2026-01-15T00:29:47.988150886Z" level=info msg="StartContainer for \"16e9274a902fb2e03ecfc2576afc52aab79cbff337dbebae564f22af9095713a\" returns successfully" Jan 15 00:29:48.044663 kubelet[2801]: E0115 00:29:48.044609 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:29:48.058192 kubelet[2801]: E0115 00:29:48.057644 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:48.061719 kubelet[2801]: E0115 00:29:48.061067 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:29:48.063011 kubelet[2801]: E0115 00:29:48.062578 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:48.126000 audit[5224]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:48.126000 audit[5224]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc6d01fa0 a2=0 a3=7ffcc6d01f8c items=0 ppid=2911 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:48.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:48.138000 audit[5224]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:48.138000 audit[5224]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc6d01fa0 a2=0 a3=0 items=0 ppid=2911 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:48.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:48.161915 kubelet[2801]: I0115 00:29:48.161825 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d7z9q" podStartSLOduration=53.161328052 podStartE2EDuration="53.161328052s" podCreationTimestamp="2026-01-15 00:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:29:48.119103724 +0000 UTC m=+59.904382856" watchObservedRunningTime="2026-01-15 00:29:48.161328052 +0000 UTC m=+59.946607143" Jan 15 00:29:48.334290 systemd-networkd[1541]: cali96702b3c66a: Gained IPv6LL Jan 15 00:29:49.000063 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:35196.service - OpenSSH per-connection server daemon (10.0.0.1:35196). Jan 15 00:29:48.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.47:22-10.0.0.1:35196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:49.003006 kernel: kauditd_printk_skb: 213 callbacks suppressed Jan 15 00:29:49.003079 kernel: audit: type=1130 audit(1768436988.998:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.47:22-10.0.0.1:35196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:29:49.056069 kubelet[2801]: E0115 00:29:49.055896 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:49.056675 kubelet[2801]: E0115 00:29:49.056312 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:49.060873 kubelet[2801]: E0115 00:29:49.060180 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:29:49.137000 audit[5238]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=5238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:49.150314 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 35196 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:29:49.151025 kernel: audit: type=1325 audit(1768436989.137:723): table=filter:137 family=2 entries=17 op=nft_register_rule pid=5238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:49.137000 audit[5238]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb7bed880 a2=0 a3=7ffcb7bed86c items=0 ppid=2911 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.151431 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:29:49.164872 systemd-logind[1646]: New session 9 of user core. 
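The repeated kubelet dns.go:153 "Nameserver limits exceeded" warnings above mean the node's resolv.conf lists more nameservers than kubelet will copy into a pod's resolv.conf, so it keeps only the first few and drops the rest; the applied line in this log is exactly three entries (1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal sketch of that trimming, assuming a three-server cap and a made-up fourth nameserver for illustration (this is not kubelet's actual code):

    # Sketch only: trim excess nameservers the way the warning above describes.
    # The 3-entry cap matches the applied line in the log; the sample input's
    # fourth server (9.9.9.9) is invented for the example.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [
            line.split()[1]
            for line in resolv_conf_text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1
        ]
        return servers[:MAX_NAMESERVERS]

    sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(sample))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']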
Jan 15 00:29:49.167949 kernel: audit: type=1300 audit(1768436989.137:723): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb7bed880 a2=0 a3=7ffcb7bed86c items=0 ppid=2911 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.168066 kernel: audit: type=1327 audit(1768436989.137:723): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:49.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:49.145000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.190589 kernel: audit: type=1101 audit(1768436989.145:724): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.190713 kernel: audit: type=1103 audit(1768436989.147:725): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.147000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.210178 kernel: audit: type=1006 audit(1768436989.147:726): pid=5231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 00:29:49.210309 kernel: audit: type=1300 audit(1768436989.147:726): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4d287980 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.147000 audit[5231]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4d287980 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.226635 kernel: audit: type=1327 audit(1768436989.147:726): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:49.147000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:49.231139 systemd-networkd[1541]: calie3db32765c9: Gained IPv6LL Jan 15 00:29:49.226000 audit[5238]: NETFILTER_CFG table=nat:138 family=2 entries=47 op=nft_register_chain pid=5238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:49.226000 audit[5238]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcb7bed880 a2=0 a3=7ffcb7bed86c items=0 ppid=2911 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:49.234890 kernel: audit: type=1325 audit(1768436989.226:727): table=nat:138 family=2 entries=47 op=nft_register_chain pid=5238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:49.246315 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 00:29:49.250000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.256000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.336235 containerd[1681]: time="2026-01-15T00:29:49.336081201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:49.470162 sshd[5243]: Connection closed by 10.0.0.1 port 35196 Jan 15 00:29:49.471129 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:49.471000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.472000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:49.481061 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:35196.service: Deactivated successfully. Jan 15 00:29:49.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.47:22-10.0.0.1:35196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:49.485004 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 00:29:49.487389 systemd-logind[1646]: Session 9 logged out. Waiting for processes to exit. Jan 15 00:29:49.491925 systemd-logind[1646]: Removed session 9. 
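The audit PROCTITLE records throughout this log (runc earlier, iptables-restore just above) store the process's argv as one hex string: the raw /proc/<pid>/cmdline bytes are NUL-separated, so audit hex-encodes the field, and it also caps the record length, which is why the long runc container IDs appear cut short. A small decoding sketch; the hex literal is copied verbatim from the iptables-restore record above:

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    # The runc records above decode the same way
    # ("runc --root /run/containerd/runc/k8s.io --log /run/containerd/...").
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # iptables-restore -w 5 -W 100000 --noflush --counters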
Jan 15 00:29:49.583129 systemd-networkd[1541]: calia9a1ada956d: Link UP Jan 15 00:29:49.585384 systemd-networkd[1541]: calia9a1ada956d: Gained carrier Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.454 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--z9lkl-eth0 csi-node-driver- calico-system 44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19 730 0 2026-01-15 00:29:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-z9lkl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9a1ada956d [] [] }} ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.454 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.521 [INFO][5272] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" HandleID="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Workload="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.521 [INFO][5272] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" HandleID="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Workload="localhost-k8s-csi--node--driver--z9lkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-z9lkl", "timestamp":"2026-01-15 00:29:49.521183848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.521 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.521 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.521 [INFO][5272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.531 [INFO][5272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.539 [INFO][5272] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.548 [INFO][5272] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.551 [INFO][5272] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.554 [INFO][5272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.554 [INFO][5272] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.557 [INFO][5272] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029 Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.563 [INFO][5272] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.573 [INFO][5272] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.573 [INFO][5272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" host="localhost" Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.573 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:49.609250 containerd[1681]: 2026-01-15 00:29:49.573 [INFO][5272] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" HandleID="k8s-pod-network.60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Workload="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.578 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z9lkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-z9lkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a1ada956d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.579 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.579 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9a1ada956d ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.586 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.589 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z9lkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029", Pod:"csi-node-driver-z9lkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a1ada956d", MAC:"96:f5:2f:fc:fc:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:49.611472 containerd[1681]: 2026-01-15 00:29:49.604 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" Namespace="calico-system" Pod="csi-node-driver-z9lkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z9lkl-eth0" Jan 15 00:29:49.625000 audit[5292]: NETFILTER_CFG table=filter:139 family=2 entries=56 op=nft_register_chain pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:49.625000 audit[5292]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffd2d9b6580 a2=0 a3=7ffd2d9b656c items=0 ppid=4373 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.625000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:49.667166 containerd[1681]: time="2026-01-15T00:29:49.667067008Z" level=info msg="connecting to shim 60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029" address="unix:///run/containerd/s/a4c730db360f399913bda40611fd139fb6289d469649c02d1c1364dedee18ec7" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:49.804304 systemd[1]: Started cri-containerd-60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029.scope - libcontainer container 60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029. 
Jan 15 00:29:49.831000 audit: BPF prog-id=249 op=LOAD Jan 15 00:29:49.832000 audit: BPF prog-id=250 op=LOAD Jan 15 00:29:49.832000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.832000 audit: BPF prog-id=250 op=UNLOAD Jan 15 00:29:49.832000 audit[5312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.832000 audit: BPF prog-id=251 op=LOAD Jan 15 00:29:49.832000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.833000 audit: BPF prog-id=252 op=LOAD Jan 15 00:29:49.833000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.833000 audit: BPF prog-id=252 op=UNLOAD Jan 15 00:29:49.833000 audit[5312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.833000 audit: BPF prog-id=251 op=UNLOAD Jan 15 00:29:49.833000 audit[5312]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.833000 audit: BPF prog-id=253 op=LOAD Jan 15 00:29:49.833000 audit[5312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5301 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:49.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630343934633535366164373165646236316535616465383833316431 Jan 15 00:29:49.838255 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:49.885759 containerd[1681]: time="2026-01-15T00:29:49.885669522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9lkl,Uid:44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19,Namespace:calico-system,Attempt:0,} returns sandbox id \"60494c556ad71edb61e5ade8831d1fc4e8f41b50b1f509f3669ac0cbb2327029\"" Jan 15 00:29:49.889025 containerd[1681]: time="2026-01-15T00:29:49.888914756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:29:49.948655 containerd[1681]: time="2026-01-15T00:29:49.948503488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:49.950636 containerd[1681]: time="2026-01-15T00:29:49.950438951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:29:49.950636 containerd[1681]: time="2026-01-15T00:29:49.950628905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:49.951140 kubelet[2801]: E0115 00:29:49.951045 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:29:49.951262 kubelet[2801]: E0115 00:29:49.951146 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:29:49.951451 kubelet[2801]: E0115 00:29:49.951343 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:49.954173 containerd[1681]: time="2026-01-15T00:29:49.954090240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:29:50.025225 containerd[1681]: time="2026-01-15T00:29:50.025088290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:50.027389 containerd[1681]: time="2026-01-15T00:29:50.027173869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:29:50.027389 containerd[1681]: time="2026-01-15T00:29:50.027279756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:50.027767 kubelet[2801]: E0115 00:29:50.027659 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:29:50.027767 kubelet[2801]: E0115 00:29:50.027712 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:29:50.027986 kubelet[2801]: E0115 00:29:50.027911 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:50.029878 kubelet[2801]: E0115 00:29:50.029158 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:50.060951 kubelet[2801]: E0115 00:29:50.060764 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:50.062129 kubelet[2801]: E0115 00:29:50.061302 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:50.063816 kubelet[2801]: E0115 00:29:50.063677 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:50.341768 containerd[1681]: time="2026-01-15T00:29:50.341501331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,}" Jan 15 00:29:50.596284 systemd-networkd[1541]: cali8600a49cba8: Link UP Jan 15 00:29:50.604193 systemd-networkd[1541]: cali8600a49cba8: Gained carrier Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.437 [INFO][5340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9sr6x-eth0 goldmane-666569f655- calico-system 0da1261a-c922-41a4-a50d-63daf18be31b 842 0 2026-01-15 00:29:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9sr6x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8600a49cba8 [] [] }} ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.437 [INFO][5340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.500 [INFO][5354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" HandleID="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Workload="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.501 [INFO][5354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" HandleID="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Workload="localhost-k8s-goldmane--666569f655--9sr6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124ef0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9sr6x", "timestamp":"2026-01-15 00:29:50.500951081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.501 [INFO][5354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.501 [INFO][5354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.501 [INFO][5354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.512 [INFO][5354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.522 [INFO][5354] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.532 [INFO][5354] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.536 [INFO][5354] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.540 [INFO][5354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.540 [INFO][5354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.544 [INFO][5354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.556 [INFO][5354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.575 [INFO][5354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.577 [INFO][5354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" host="localhost" Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.577 [INFO][5354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:29:50.642945 containerd[1681]: 2026-01-15 00:29:50.577 [INFO][5354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" HandleID="k8s-pod-network.85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Workload="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.584 [INFO][5340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9sr6x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0da1261a-c922-41a4-a50d-63daf18be31b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9sr6x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8600a49cba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.588 [INFO][5340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.588 [INFO][5340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8600a49cba8 ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.608 [INFO][5340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.612 [INFO][5340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9sr6x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0da1261a-c922-41a4-a50d-63daf18be31b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf", Pod:"goldmane-666569f655-9sr6x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8600a49cba8", MAC:"0a:b1:14:c9:96:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:29:50.644668 containerd[1681]: 2026-01-15 00:29:50.635 [INFO][5340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" Namespace="calico-system" Pod="goldmane-666569f655-9sr6x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9sr6x-eth0" Jan 15 00:29:50.674000 audit[5372]: NETFILTER_CFG table=filter:140 family=2 entries=68 op=nft_register_chain pid=5372 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:29:50.674000 audit[5372]: SYSCALL arch=c000003e syscall=46 success=yes exit=32292 a0=3 a1=7ffd2e3fc990 a2=0 a3=7ffd2e3fc97c items=0 ppid=4373 pid=5372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:29:50.688277 containerd[1681]: time="2026-01-15T00:29:50.688135705Z" level=info msg="connecting to shim 85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf" address="unix:///run/containerd/s/ab7bb13ca6afc88cb8226a99de98db7c2bd865a64873c3106f8d690a13218afb" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:29:50.747034 systemd[1]: Started cri-containerd-85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf.scope - libcontainer container 85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf. 
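The earlier ErrImagePull entries for the ghcr.io/flatcar/calico images (the registry answered 404 Not Found) are what kubelet then reports as ImagePullBackOff: the pull is retried on an exponential backoff instead of failing the pod outright. A rough sketch of such a schedule; the 10 s initial delay, factor 2 and 300 s cap are commonly cited kubelet defaults and are assumptions here, not values read from this log:

    # Sketch only: an exponential retry schedule like the one behind the
    # "Back-off pulling image" messages above. Parameters are assumed defaults.
    def backoff_schedule(initial=10, factor=2, cap=300, attempts=8):
        delay, schedule = initial, []
        for _ in range(attempts):
            schedule.append(delay)
            delay = min(delay * factor, cap)
        return schedule

    print(backoff_schedule())  # [10, 20, 40, 80, 160, 300, 300, 300]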
Jan 15 00:29:50.766000 audit: BPF prog-id=254 op=LOAD Jan 15 00:29:50.767000 audit: BPF prog-id=255 op=LOAD Jan 15 00:29:50.767000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.767000 audit: BPF prog-id=255 op=UNLOAD Jan 15 00:29:50.767000 audit[5393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.768000 audit: BPF prog-id=256 op=LOAD Jan 15 00:29:50.768000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.768000 audit: BPF prog-id=257 op=LOAD Jan 15 00:29:50.768000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.768000 audit: BPF prog-id=257 op=UNLOAD Jan 15 00:29:50.768000 audit[5393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.768000 audit: BPF prog-id=256 op=UNLOAD Jan 15 00:29:50.768000 audit[5393]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.769000 audit: BPF prog-id=258 op=LOAD Jan 15 00:29:50.769000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5382 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:50.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835363130323632663333353934333563653735653963376265323962 Jan 15 00:29:50.798892 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:29:50.855667 containerd[1681]: time="2026-01-15T00:29:50.855440258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9sr6x,Uid:0da1261a-c922-41a4-a50d-63daf18be31b,Namespace:calico-system,Attempt:0,} returns sandbox id \"85610262f3359435ce75e9c7be29b8ac67bdbb76405ca3c3cc97318add2bbddf\"" Jan 15 00:29:50.858559 containerd[1681]: time="2026-01-15T00:29:50.858399763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:29:50.929086 containerd[1681]: time="2026-01-15T00:29:50.929008758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:50.931239 containerd[1681]: time="2026-01-15T00:29:50.931088813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:29:50.931239 containerd[1681]: time="2026-01-15T00:29:50.931166927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:50.931893 kubelet[2801]: E0115 00:29:50.931478 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:29:50.931893 kubelet[2801]: E0115 00:29:50.931571 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:29:50.931990 kubelet[2801]: E0115 00:29:50.931692 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bp5qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:50.933284 kubelet[2801]: E0115 00:29:50.933158 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:29:50.959404 systemd-networkd[1541]: calia9a1ada956d: Gained IPv6LL Jan 15 00:29:51.071396 kubelet[2801]: E0115 00:29:51.071353 2801 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:51.073634 kubelet[2801]: E0115 00:29:51.073595 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:29:51.074046 kubelet[2801]: E0115 00:29:51.073879 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:29:51.128000 audit[5420]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:51.128000 audit[5420]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe03c365f0 a2=0 a3=7ffe03c365dc items=0 ppid=2911 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:51.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:51.135000 audit[5420]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:29:51.135000 audit[5420]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe03c365f0 a2=0 a3=7ffe03c365dc items=0 ppid=2911 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:51.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:29:52.073922 kubelet[2801]: E0115 00:29:52.073602 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:29:52.111243 systemd-networkd[1541]: cali8600a49cba8: Gained IPv6LL Jan 15 00:29:52.338712 containerd[1681]: time="2026-01-15T00:29:52.338413917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:29:52.408233 containerd[1681]: time="2026-01-15T00:29:52.408148124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:52.409981 containerd[1681]: time="2026-01-15T00:29:52.409869830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:29:52.410110 containerd[1681]: time="2026-01-15T00:29:52.410077126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:52.410459 kubelet[2801]: E0115 00:29:52.410167 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:29:52.410459 kubelet[2801]: E0115 00:29:52.410264 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:29:52.410459 kubelet[2801]: E0115 00:29:52.410364 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:61a58bd17c8b4c158bfe37780c202536,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:52.416348 containerd[1681]: time="2026-01-15T00:29:52.416269307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:29:52.478810 containerd[1681]: time="2026-01-15T00:29:52.478696788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:52.480742 containerd[1681]: time="2026-01-15T00:29:52.480672444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:29:52.481068 containerd[1681]: time="2026-01-15T00:29:52.480846196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:52.481208 kubelet[2801]: E0115 00:29:52.481143 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:29:52.481208 kubelet[2801]: E0115 00:29:52.481192 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:29:52.481319 kubelet[2801]: E0115 00:29:52.481293 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:52.482954 kubelet[2801]: E0115 00:29:52.482901 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:29:54.488035 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:47324.service - OpenSSH per-connection server daemon (10.0.0.1:47324). Jan 15 00:29:54.492438 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 15 00:29:54.492608 kernel: audit: type=1130 audit(1768436994.486:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.47:22-10.0.0.1:47324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:29:54.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.47:22-10.0.0.1:47324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:54.582000 audit[5422]: USER_ACCT pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.584739 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:29:54.587762 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:29:54.597599 systemd-logind[1646]: New session 10 of user core. Jan 15 00:29:54.585000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.620191 kernel: audit: type=1101 audit(1768436994.582:754): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.620309 kernel: audit: type=1103 audit(1768436994.585:755): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.620350 kernel: audit: type=1006 audit(1768436994.585:756): pid=5422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 00:29:54.629897 kernel: audit: type=1300 audit(1768436994.585:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb92f830 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:54.585000 audit[5422]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb92f830 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:54.631136 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 15 00:29:54.585000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:54.655138 kernel: audit: type=1327 audit(1768436994.585:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:54.633000 audit[5422]: USER_START pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.674208 kernel: audit: type=1105 audit(1768436994.633:757): pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.674367 kernel: audit: type=1103 audit(1768436994.636:758): pid=5425 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.636000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.800391 sshd[5425]: Connection closed by 10.0.0.1 port 47324 Jan 15 00:29:54.802119 sshd-session[5422]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:54.802000 audit[5422]: USER_END pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.802000 audit[5422]: CRED_DISP pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.836351 kernel: audit: type=1106 audit(1768436994.802:759): pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.836472 kernel: audit: type=1104 audit(1768436994.802:760): pid=5422 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.845088 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:47324.service: Deactivated successfully. Jan 15 00:29:54.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.47:22-10.0.0.1:47324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:54.848424 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 00:29:54.851041 systemd-logind[1646]: Session 10 logged out. Waiting for processes to exit. 
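The PROCTITLE fields above are hex-encoded process titles with NUL bytes separating the arguments. A minimal stdlib decoder, fed two hex strings copied verbatim from the records in this log:

// Decoder for the hex-encoded audit PROCTITLE fields: the value is the raw
// process title with NUL bytes separating argv entries.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// NUL-separated argv -> space-separated string for readability.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	samples := []string{
		// From the sshd-session records above.
		"737368642D73657373696F6E3A20636F7265205B707269765D",
		// From the netfilter records above.
		"69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030",
	}
	for _, s := range samples {
		out, err := decodeProctitle(s)
		if err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Println(out)
	}
}

It prints "sshd-session: core [priv]" and the full iptables-nft-restore command line ("iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000"), which is quicker than decoding by hand.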
Jan 15 00:29:54.856018 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:47338.service - OpenSSH per-connection server daemon (10.0.0.1:47338). Jan 15 00:29:54.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.47:22-10.0.0.1:47338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:54.857913 systemd-logind[1646]: Removed session 10. Jan 15 00:29:54.935000 audit[5439]: USER_ACCT pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.938158 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 47338 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:29:54.937000 audit[5439]: CRED_ACQ pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.937000 audit[5439]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28a10c60 a2=3 a3=0 items=0 ppid=1 pid=5439 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:54.937000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:54.940322 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:29:54.948442 systemd-logind[1646]: New session 11 of user core. Jan 15 00:29:54.957150 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 00:29:54.959000 audit[5439]: USER_START pid=5439 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:54.962000 audit[5442]: CRED_ACQ pid=5442 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.182956 sshd[5442]: Connection closed by 10.0.0.1 port 47338 Jan 15 00:29:55.183928 sshd-session[5439]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:55.187000 audit[5439]: USER_END pid=5439 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.187000 audit[5439]: CRED_DISP pid=5439 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.200683 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:47338.service: Deactivated successfully. 
Jan 15 00:29:55.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.47:22-10.0.0.1:47338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:55.203768 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 00:29:55.208305 systemd-logind[1646]: Session 11 logged out. Waiting for processes to exit. Jan 15 00:29:55.232661 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340). Jan 15 00:29:55.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.47:22-10.0.0.1:47340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:55.236463 systemd-logind[1646]: Removed session 11. Jan 15 00:29:55.355000 audit[5454]: USER_ACCT pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.357983 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:29:55.357000 audit[5454]: CRED_ACQ pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.357000 audit[5454]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0b69a5a0 a2=3 a3=0 items=0 ppid=1 pid=5454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:29:55.357000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:29:55.360383 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:29:55.370870 systemd-logind[1646]: New session 12 of user core. Jan 15 00:29:55.386245 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 15 00:29:55.389000 audit[5454]: USER_START pid=5454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.393000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.531929 sshd[5457]: Connection closed by 10.0.0.1 port 47340 Jan 15 00:29:55.532197 sshd-session[5454]: pam_unix(sshd:session): session closed for user core Jan 15 00:29:55.533000 audit[5454]: USER_END pid=5454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.533000 audit[5454]: CRED_DISP pid=5454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:29:55.538184 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:47340.service: Deactivated successfully. Jan 15 00:29:55.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.47:22-10.0.0.1:47340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:29:55.541174 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 00:29:55.545703 systemd-logind[1646]: Session 12 logged out. Waiting for processes to exit. Jan 15 00:29:55.547369 systemd-logind[1646]: Removed session 12. 
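All of the Calico pulls in this log fail identically: ghcr.io answers 404 for ghcr.io/flatcar/calico/<component>:v3.30.4, containerd reports NotFound, and kubelet goes into ImagePullBackOff. A sketch that reproduces the 404 outside of kubelet, assuming ghcr.io follows the standard registry v2 anonymous token flow (the token endpoint and Accept header are assumptions; the repository and tag come from the log):

// Sketch: check whether ghcr.io/flatcar/calico/goldmane:v3.30.4 resolves,
// assuming the standard registry v2 anonymous token flow.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4"

	// 1. Fetch an anonymous pull token for the repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest for the tag; a missing tag yields 404 Not Found,
	// matching the containerd "fetch failed after status: 404" lines above.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	manifestResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer manifestResp.Body.Close()
	fmt.Println("manifest status:", manifestResp.Status) // expect 404 Not Found
}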
Jan 15 00:29:57.335939 kubelet[2801]: E0115 00:29:57.335637 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:29:59.344871 containerd[1681]: time="2026-01-15T00:29:59.344690689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:29:59.412153 containerd[1681]: time="2026-01-15T00:29:59.412060762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:29:59.414693 containerd[1681]: time="2026-01-15T00:29:59.414344254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:29:59.414693 containerd[1681]: time="2026-01-15T00:29:59.414425435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:29:59.415363 kubelet[2801]: E0115 00:29:59.415165 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:29:59.415363 kubelet[2801]: E0115 00:29:59.415285 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:29:59.416151 kubelet[2801]: E0115 00:29:59.415408 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl7dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:29:59.416975 kubelet[2801]: E0115 00:29:59.416928 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:30:00.336762 kubelet[2801]: E0115 00:30:00.336291 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:30:00.546262 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:47348.service - OpenSSH per-connection server daemon (10.0.0.1:47348). Jan 15 00:30:00.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.47:22-10.0.0.1:47348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:00.551044 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 00:30:00.551164 kernel: audit: type=1130 audit(1768437000.544:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.47:22-10.0.0.1:47348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:00.649000 audit[5481]: USER_ACCT pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.654407 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:00.656714 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 47348 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:00.666930 kernel: audit: type=1101 audit(1768437000.649:781): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.651000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.688916 systemd-logind[1646]: New session 13 of user core. Jan 15 00:30:00.691379 kernel: audit: type=1103 audit(1768437000.651:782): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.691628 kernel: audit: type=1006 audit(1768437000.651:783): pid=5481 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 15 00:30:00.691657 kernel: audit: type=1300 audit(1768437000.651:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc883007a0 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:00.651000 audit[5481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc883007a0 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:00.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:00.715013 kernel: audit: type=1327 audit(1768437000.651:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:00.716751 systemd[1]: Started session-13.scope - Session 13 of User core. 
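The recurring dns.go:153 "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than kubelet will pass into a pod, so the extras are dropped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A stand-alone illustration of that cap, not kubelet's actual code (the three-server limit matches the behaviour shown in this log; the fourth entry below is hypothetical):

// Illustration of the nameserver cap behind the dns.go:153 warnings above.
package main

import "fmt"

const maxDNSNameservers = 3 // limit applied to a pod's resolv.conf

func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxDNSNameservers {
		return ns, false
	}
	return ns[:maxDNSNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with more entries than the limit;
	// the first three match the "applied nameserver line" in the log.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
	applied, truncated := capNameservers(host)
	fmt.Println("applied:", applied, "truncated:", truncated)
}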
Jan 15 00:30:00.723000 audit[5481]: USER_START pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.746849 kernel: audit: type=1105 audit(1768437000.723:784): pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.727000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.760967 kernel: audit: type=1103 audit(1768437000.727:785): pid=5486 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.835871 sshd[5486]: Connection closed by 10.0.0.1 port 47348 Jan 15 00:30:00.836305 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:00.837000 audit[5481]: USER_END pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.842666 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:47348.service: Deactivated successfully. Jan 15 00:30:00.847308 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 00:30:00.849508 systemd-logind[1646]: Session 13 logged out. Waiting for processes to exit. Jan 15 00:30:00.852572 systemd-logind[1646]: Removed session 13. Jan 15 00:30:00.837000 audit[5481]: CRED_DISP pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.865972 kernel: audit: type=1106 audit(1768437000.837:786): pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.866138 kernel: audit: type=1104 audit(1768437000.837:787): pid=5481 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:00.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.47:22-10.0.0.1:47348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:01.335906 kubelet[2801]: E0115 00:30:01.335742 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:30:01.339223 containerd[1681]: time="2026-01-15T00:30:01.339095072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:01.676225 containerd[1681]: time="2026-01-15T00:30:01.676117637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:01.678892 containerd[1681]: time="2026-01-15T00:30:01.678633232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:01.678892 containerd[1681]: time="2026-01-15T00:30:01.678758453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:01.679132 kubelet[2801]: E0115 00:30:01.679076 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:01.679195 kubelet[2801]: E0115 00:30:01.679140 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:01.679755 kubelet[2801]: E0115 00:30:01.679582 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54hkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:01.680164 containerd[1681]: time="2026-01-15T00:30:01.679944658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:01.682542 kubelet[2801]: E0115 00:30:01.682421 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:30:01.865649 containerd[1681]: time="2026-01-15T00:30:01.865552896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:01.867539 containerd[1681]: time="2026-01-15T00:30:01.867348483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:01.867539 containerd[1681]: time="2026-01-15T00:30:01.867429912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:01.867669 kubelet[2801]: E0115 00:30:01.867627 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:01.867884 kubelet[2801]: E0115 00:30:01.867672 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:01.868117 kubelet[2801]: E0115 00:30:01.867956 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-976wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:01.869538 kubelet[2801]: E0115 00:30:01.869314 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:30:02.337583 containerd[1681]: time="2026-01-15T00:30:02.337304914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:02.410108 containerd[1681]: time="2026-01-15T00:30:02.409969727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:02.412491 containerd[1681]: time="2026-01-15T00:30:02.412337406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:02.412602 containerd[1681]: time="2026-01-15T00:30:02.412495699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:02.413052 
kubelet[2801]: E0115 00:30:02.412715 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:02.413052 kubelet[2801]: E0115 00:30:02.413027 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:02.413617 kubelet[2801]: E0115 00:30:02.413150 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpxxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:02.415174 kubelet[2801]: E0115 00:30:02.415065 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:30:05.336970 containerd[1681]: time="2026-01-15T00:30:05.336918234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:30:05.398895 containerd[1681]: time="2026-01-15T00:30:05.398663760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:05.400898 containerd[1681]: time="2026-01-15T00:30:05.400839074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:30:05.400898 containerd[1681]: time="2026-01-15T00:30:05.400870078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:05.401272 kubelet[2801]: E0115 00:30:05.401192 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:30:05.401272 kubelet[2801]: E0115 00:30:05.401231 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:30:05.401980 kubelet[2801]: E0115 00:30:05.401519 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bp5qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:05.402182 containerd[1681]: time="2026-01-15T00:30:05.401984414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:30:05.403724 kubelet[2801]: E0115 00:30:05.403527 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:30:05.482428 containerd[1681]: time="2026-01-15T00:30:05.482291693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:05.484676 containerd[1681]: time="2026-01-15T00:30:05.484504350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:30:05.484676 containerd[1681]: time="2026-01-15T00:30:05.484637278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:05.485477 kubelet[2801]: E0115 00:30:05.485196 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:30:05.485477 kubelet[2801]: E0115 00:30:05.485366 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:30:05.485608 kubelet[2801]: E0115 00:30:05.485554 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:05.488036 containerd[1681]: time="2026-01-15T00:30:05.488001758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:30:05.575224 containerd[1681]: time="2026-01-15T00:30:05.575081638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:05.577890 containerd[1681]: time="2026-01-15T00:30:05.577557170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:30:05.577890 containerd[1681]: time="2026-01-15T00:30:05.577610755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:05.578024 kubelet[2801]: E0115 00:30:05.577934 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:30:05.578024 kubelet[2801]: E0115 00:30:05.577997 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:30:05.578571 kubelet[2801]: E0115 00:30:05.578364 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:05.580511 kubelet[2801]: E0115 00:30:05.580276 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:30:05.859394 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206). 
Jan 15 00:30:05.859993 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:30:05.860046 kernel: audit: type=1130 audit(1768437005.857:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.47:22-10.0.0.1:45206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:05.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.47:22-10.0.0.1:45206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:05.940000 audit[5503]: USER_ACCT pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:05.945337 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:05.946915 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:05.954748 systemd-logind[1646]: New session 14 of user core. Jan 15 00:30:05.943000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:05.967767 kernel: audit: type=1101 audit(1768437005.940:790): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:05.967963 kernel: audit: type=1103 audit(1768437005.943:791): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:05.968005 kernel: audit: type=1006 audit(1768437005.943:792): pid=5503 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 15 00:30:05.943000 audit[5503]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8edf52a0 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:05.991911 kernel: audit: type=1300 audit(1768437005.943:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8edf52a0 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:05.992290 kernel: audit: type=1327 audit(1768437005.943:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:05.943000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:05.999179 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 15 00:30:06.001000 audit[5503]: USER_START pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.001000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.029155 kernel: audit: type=1105 audit(1768437006.001:793): pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.029297 kernel: audit: type=1103 audit(1768437006.001:794): pid=5506 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.128654 sshd[5506]: Connection closed by 10.0.0.1 port 45206 Jan 15 00:30:06.129202 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:06.130000 audit[5503]: USER_END pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.137364 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:45206.service: Deactivated successfully. Jan 15 00:30:06.141530 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 00:30:06.130000 audit[5503]: CRED_DISP pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.145755 systemd-logind[1646]: Session 14 logged out. Waiting for processes to exit. Jan 15 00:30:06.147466 systemd-logind[1646]: Removed session 14. Jan 15 00:30:06.156303 kernel: audit: type=1106 audit(1768437006.130:795): pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.156498 kernel: audit: type=1104 audit(1768437006.130:796): pid=5503 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:06.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.47:22-10.0.0.1:45206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:07.337686 kubelet[2801]: E0115 00:30:07.337385 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:30:11.145709 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:45216.service - OpenSSH per-connection server daemon (10.0.0.1:45216). Jan 15 00:30:11.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.47:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:11.149721 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:30:11.150029 kernel: audit: type=1130 audit(1768437011.144:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.47:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:11.261000 audit[5552]: USER_ACCT pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.262611 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:11.264556 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:11.272553 systemd-logind[1646]: New session 15 of user core. 
Jan 15 00:30:11.262000 audit[5552]: CRED_ACQ pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.288380 kernel: audit: type=1101 audit(1768437011.261:799): pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.288737 kernel: audit: type=1103 audit(1768437011.262:800): pid=5552 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.288767 kernel: audit: type=1006 audit(1768437011.263:801): pid=5552 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 00:30:11.295959 kernel: audit: type=1300 audit(1768437011.263:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0f94d930 a2=3 a3=0 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:11.263000 audit[5552]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0f94d930 a2=3 a3=0 items=0 ppid=1 pid=5552 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:11.308411 kernel: audit: type=1327 audit(1768437011.263:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:11.263000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:11.315162 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 15 00:30:11.318000 audit[5552]: USER_START pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.321000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.348600 kernel: audit: type=1105 audit(1768437011.318:802): pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.348706 kernel: audit: type=1103 audit(1768437011.321:803): pid=5555 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.473959 sshd[5555]: Connection closed by 10.0.0.1 port 45216 Jan 15 00:30:11.474569 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:11.475000 audit[5552]: USER_END pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.476000 audit[5552]: CRED_DISP pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.506464 kernel: audit: type=1106 audit(1768437011.475:804): pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.506563 kernel: audit: type=1104 audit(1768437011.476:805): pid=5552 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.512374 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:45216.service: Deactivated successfully. Jan 15 00:30:11.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.47:22-10.0.0.1:45216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:11.515022 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 00:30:11.516379 systemd-logind[1646]: Session 15 logged out. Waiting for processes to exit. Jan 15 00:30:11.520543 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:45226.service - OpenSSH per-connection server daemon (10.0.0.1:45226). 
Jan 15 00:30:11.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.47:22-10.0.0.1:45226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:11.522345 systemd-logind[1646]: Removed session 15. Jan 15 00:30:11.582000 audit[5570]: USER_ACCT pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.583564 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 45226 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:11.584000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.584000 audit[5570]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2be4cb30 a2=3 a3=0 items=0 ppid=1 pid=5570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:11.584000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:11.586087 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:11.594115 systemd-logind[1646]: New session 16 of user core. Jan 15 00:30:11.608110 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 00:30:11.611000 audit[5570]: USER_START pid=5570 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.614000 audit[5573]: CRED_ACQ pid=5573 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.910242 sshd[5573]: Connection closed by 10.0.0.1 port 45226 Jan 15 00:30:11.912124 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:11.913000 audit[5570]: USER_END pid=5570 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.914000 audit[5570]: CRED_DISP pid=5570 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:11.933306 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:45226.service: Deactivated successfully. Jan 15 00:30:11.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.47:22-10.0.0.1:45226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:11.935990 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 00:30:11.937614 systemd-logind[1646]: Session 16 logged out. Waiting for processes to exit. Jan 15 00:30:11.940984 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230). Jan 15 00:30:11.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.47:22-10.0.0.1:45230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:11.942008 systemd-logind[1646]: Removed session 16. Jan 15 00:30:12.029000 audit[5584]: USER_ACCT pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.030664 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:12.031000 audit[5584]: CRED_ACQ pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.031000 audit[5584]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef3359980 a2=3 a3=0 items=0 ppid=1 pid=5584 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.031000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:12.032700 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:12.043937 systemd-logind[1646]: New session 17 of user core. Jan 15 00:30:12.051135 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 00:30:12.055000 audit[5584]: USER_START pid=5584 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.058000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.349389 kubelet[2801]: E0115 00:30:12.349268 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:30:12.766461 sshd[5587]: Connection closed by 10.0.0.1 port 45230 Jan 15 00:30:12.767127 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:12.771000 audit[5584]: USER_END pid=5584 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.771000 audit[5584]: CRED_DISP pid=5584 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.777000 audit[5601]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:12.777000 audit[5601]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe1bdc90a0 a2=0 a3=7ffe1bdc908c items=0 ppid=2911 pid=5601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:12.781344 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:45230.service: Deactivated successfully. Jan 15 00:30:12.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.47:22-10.0.0.1:45230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:12.784000 audit[5601]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:12.784172 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 00:30:12.786337 systemd-logind[1646]: Session 17 logged out. Waiting for processes to exit. 
Jan 15 00:30:12.784000 audit[5601]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe1bdc90a0 a2=0 a3=0 items=0 ppid=2911 pid=5601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:12.790316 systemd-logind[1646]: Removed session 17. Jan 15 00:30:12.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.47:22-10.0.0.1:56320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:12.793274 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:56320.service - OpenSSH per-connection server daemon (10.0.0.1:56320). Jan 15 00:30:12.834000 audit[5607]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:12.834000 audit[5607]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeee7be9f0 a2=0 a3=7ffeee7be9dc items=0 ppid=2911 pid=5607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:12.844000 audit[5607]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:12.844000 audit[5607]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeee7be9f0 a2=0 a3=0 items=0 ppid=2911 pid=5607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:12.883000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.885109 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 56320 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:12.885000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.885000 audit[5605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc266899d0 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:12.885000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:12.887716 
sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:12.897336 systemd-logind[1646]: New session 18 of user core. Jan 15 00:30:12.903093 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 00:30:12.908000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:12.913000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.263160 sshd[5610]: Connection closed by 10.0.0.1 port 56320 Jan 15 00:30:13.265686 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:13.268000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.268000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.283325 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:56320.service: Deactivated successfully. Jan 15 00:30:13.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.47:22-10.0.0.1:56320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:13.290020 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 00:30:13.292231 systemd-logind[1646]: Session 18 logged out. Waiting for processes to exit. Jan 15 00:30:13.298300 systemd-logind[1646]: Removed session 18. Jan 15 00:30:13.304389 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:56334.service - OpenSSH per-connection server daemon (10.0.0.1:56334). Jan 15 00:30:13.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.47:22-10.0.0.1:56334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:13.343864 kubelet[2801]: E0115 00:30:13.343548 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:30:13.415000 audit[5621]: USER_ACCT pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.417888 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 56334 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:13.418000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.418000 audit[5621]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc122a63b0 a2=3 a3=0 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:13.418000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:13.420134 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:13.437922 systemd-logind[1646]: New session 19 of user core. Jan 15 00:30:13.441512 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 15 00:30:13.448000 audit[5621]: USER_START pid=5621 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.452000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.586278 sshd[5624]: Connection closed by 10.0.0.1 port 56334 Jan 15 00:30:13.586366 sshd-session[5621]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:13.587000 audit[5621]: USER_END pid=5621 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.588000 audit[5621]: CRED_DISP pid=5621 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:13.593503 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:56334.service: Deactivated successfully. Jan 15 00:30:13.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.47:22-10.0.0.1:56334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:13.597694 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 00:30:13.601531 systemd-logind[1646]: Session 19 logged out. Waiting for processes to exit. Jan 15 00:30:13.606607 systemd-logind[1646]: Removed session 19. 
Jan 15 00:30:15.345504 kubelet[2801]: E0115 00:30:15.345373 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:30:15.348593 kubelet[2801]: E0115 00:30:15.348090 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:30:16.337034 kubelet[2801]: E0115 00:30:16.336961 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:30:18.338521 containerd[1681]: time="2026-01-15T00:30:18.338371522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:30:18.476874 containerd[1681]: time="2026-01-15T00:30:18.476519224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:18.479679 containerd[1681]: time="2026-01-15T00:30:18.479606190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:30:18.479679 containerd[1681]: time="2026-01-15T00:30:18.479643838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:18.480205 kubelet[2801]: E0115 00:30:18.480021 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:30:18.480205 kubelet[2801]: E0115 00:30:18.480120 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:30:18.480750 kubelet[2801]: E0115 00:30:18.480259 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:61a58bd17c8b4c158bfe37780c202536,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:18.483631 containerd[1681]: time="2026-01-15T00:30:18.483559252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:30:18.552205 containerd[1681]: time="2026-01-15T00:30:18.551960373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:18.554122 containerd[1681]: time="2026-01-15T00:30:18.553981064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:30:18.554122 containerd[1681]: time="2026-01-15T00:30:18.554086300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:18.554678 kubelet[2801]: E0115 00:30:18.554567 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:30:18.554678 kubelet[2801]: E0115 00:30:18.554646 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:30:18.555173 kubelet[2801]: E0115 00:30:18.555016 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr9gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74646ff747-bfw76_calico-system(3f98e226-1bab-40f4-84c7-2ec1cf926463): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:18.556735 kubelet[2801]: E0115 00:30:18.556677 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:30:18.625713 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:56346.service - OpenSSH per-connection server daemon (10.0.0.1:56346). Jan 15 00:30:18.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.47:22-10.0.0.1:56346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:18.645095 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 15 00:30:18.645217 kernel: audit: type=1130 audit(1768437018.629:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.47:22-10.0.0.1:56346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:18.773000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.775293 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 56346 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:18.777363 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:18.789890 systemd-logind[1646]: New session 20 of user core. Jan 15 00:30:18.775000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.809525 kernel: audit: type=1101 audit(1768437018.773:848): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.809680 kernel: audit: type=1103 audit(1768437018.775:849): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.809953 kernel: audit: type=1006 audit(1768437018.776:850): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 15 00:30:18.776000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9ed931e0 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:18.851891 kernel: audit: type=1300 audit(1768437018.776:850): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9ed931e0 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:18.851976 kernel: audit: type=1327 audit(1768437018.776:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:18.776000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:18.852352 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 00:30:18.859000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.877170 kernel: audit: type=1105 audit(1768437018.859:851): pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.877300 kernel: audit: type=1103 audit(1768437018.863:852): pid=5643 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:18.863000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:19.018127 sshd[5643]: Connection closed by 10.0.0.1 port 56346 Jan 15 00:30:19.019636 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:19.021000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:19.027011 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:56346.service: Deactivated successfully. Jan 15 00:30:19.029907 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 00:30:19.032390 systemd-logind[1646]: Session 20 logged out. Waiting for processes to exit. Jan 15 00:30:19.036199 systemd-logind[1646]: Removed session 20. Jan 15 00:30:19.038871 kernel: audit: type=1106 audit(1768437019.021:853): pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:19.021000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:19.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.47:22-10.0.0.1:56346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:19.050954 kernel: audit: type=1104 audit(1768437019.021:854): pid=5640 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:19.390000 audit[5662]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:19.390000 audit[5662]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3ba1d040 a2=0 a3=7ffc3ba1d02c items=0 ppid=2911 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:19.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:19.403000 audit[5662]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:30:19.403000 audit[5662]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc3ba1d040 a2=0 a3=7ffc3ba1d02c items=0 ppid=2911 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:19.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:30:20.336250 kubelet[2801]: E0115 00:30:20.336145 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b" Jan 15 00:30:20.343226 kubelet[2801]: E0115 00:30:20.343146 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19" Jan 15 00:30:24.037070 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:35554.service - OpenSSH per-connection server daemon (10.0.0.1:35554). 
Jan 15 00:30:24.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.47:22-10.0.0.1:35554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:24.039909 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 00:30:24.039967 kernel: audit: type=1130 audit(1768437024.036:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.47:22-10.0.0.1:35554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:24.117000 audit[5665]: USER_ACCT pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.119093 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 35554 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:24.121654 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:24.132680 systemd-logind[1646]: New session 21 of user core. Jan 15 00:30:24.119000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.151568 kernel: audit: type=1101 audit(1768437024.117:859): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.151685 kernel: audit: type=1103 audit(1768437024.119:860): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.119000 audit[5665]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed7579720 a2=3 a3=0 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:24.160068 kernel: audit: type=1006 audit(1768437024.119:861): pid=5665 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 15 00:30:24.160110 kernel: audit: type=1300 audit(1768437024.119:861): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed7579720 a2=3 a3=0 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:24.119000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:24.180233 kernel: audit: type=1327 audit(1768437024.119:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:24.182372 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 00:30:24.186000 audit[5665]: USER_START pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.190000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.218553 kernel: audit: type=1105 audit(1768437024.186:862): pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.218725 kernel: audit: type=1103 audit(1768437024.190:863): pid=5668 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.312733 sshd[5668]: Connection closed by 10.0.0.1 port 35554 Jan 15 00:30:24.313065 sshd-session[5665]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:24.316000 audit[5665]: USER_END pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.322217 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:35554.service: Deactivated successfully. Jan 15 00:30:24.326483 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 00:30:24.329719 systemd-logind[1646]: Session 21 logged out. Waiting for processes to exit. Jan 15 00:30:24.332114 systemd-logind[1646]: Removed session 21. Jan 15 00:30:24.337520 kernel: audit: type=1106 audit(1768437024.316:864): pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.337643 kernel: audit: type=1104 audit(1768437024.316:865): pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.316000 audit[5665]: CRED_DISP pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:24.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.47:22-10.0.0.1:35554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:26.342100 containerd[1681]: time="2026-01-15T00:30:26.342030788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:26.429039 containerd[1681]: time="2026-01-15T00:30:26.428949135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:26.432140 containerd[1681]: time="2026-01-15T00:30:26.430728668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:26.432140 containerd[1681]: time="2026-01-15T00:30:26.430877105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:26.432296 kubelet[2801]: E0115 00:30:26.431119 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:26.432296 kubelet[2801]: E0115 00:30:26.431169 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:26.432296 kubelet[2801]: E0115 00:30:26.431337 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpxxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b544d645f-gznqk_calico-apiserver(6241b949-d82f-4e04-b2b8-fdb1cda43b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:26.433182 kubelet[2801]: E0115 00:30:26.433054 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b544d645f-gznqk" podUID="6241b949-d82f-4e04-b2b8-fdb1cda43b39" Jan 15 00:30:27.338180 containerd[1681]: time="2026-01-15T00:30:27.337239754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:27.405727 containerd[1681]: time="2026-01-15T00:30:27.405625055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:27.407289 containerd[1681]: time="2026-01-15T00:30:27.407120411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:27.407289 containerd[1681]: time="2026-01-15T00:30:27.407269909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:27.407671 kubelet[2801]: E0115 00:30:27.407593 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:27.407671 kubelet[2801]: E0115 00:30:27.407643 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:27.408057 kubelet[2801]: E0115 00:30:27.407980 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-976wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-f8t27_calico-apiserver(4b2ac921-bc9c-4449-a6c1-96910dec381e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:27.408926 containerd[1681]: time="2026-01-15T00:30:27.408502184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:30:27.409921 kubelet[2801]: E0115 00:30:27.409655 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-f8t27" podUID="4b2ac921-bc9c-4449-a6c1-96910dec381e" Jan 15 00:30:27.474098 containerd[1681]: time="2026-01-15T00:30:27.473946797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:27.475874 containerd[1681]: time="2026-01-15T00:30:27.475726852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:30:27.476120 containerd[1681]: time="2026-01-15T00:30:27.475914668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Jan 15 00:30:27.476304 kubelet[2801]: E0115 00:30:27.476199 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:30:27.477235 kubelet[2801]: E0115 00:30:27.476315 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:30:27.477235 kubelet[2801]: E0115 00:30:27.476530 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl7dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b75c74db4-kxw6j_calico-system(aa219502-b65d-488f-aa83-975822920d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:27.478288 kubelet[2801]: E0115 00:30:27.478146 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b75c74db4-kxw6j" podUID="aa219502-b65d-488f-aa83-975822920d6e" Jan 15 00:30:29.337228 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:35566.service - OpenSSH per-connection server daemon (10.0.0.1:35566). Jan 15 00:30:29.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.47:22-10.0.0.1:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:29.340281 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:30:29.340553 kernel: audit: type=1130 audit(1768437029.336:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.47:22-10.0.0.1:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:29.419000 audit[5683]: USER_ACCT pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.421009 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 35566 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I Jan 15 00:30:29.438000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.439902 kernel: audit: type=1101 audit(1768437029.419:868): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.439974 kernel: audit: type=1103 audit(1768437029.438:869): pid=5683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.440518 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:29.448990 systemd-logind[1646]: New session 22 of user core. 
Jan 15 00:30:29.462990 kernel: audit: type=1006 audit(1768437029.438:870): pid=5683 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 15 00:30:29.463089 kernel: audit: type=1300 audit(1768437029.438:870): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbd98d3f0 a2=3 a3=0 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:29.438000 audit[5683]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbd98d3f0 a2=3 a3=0 items=0 ppid=1 pid=5683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:29.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:29.483460 kernel: audit: type=1327 audit(1768437029.438:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:29.485141 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 00:30:29.488000 audit[5683]: USER_START pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.492000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.529182 kernel: audit: type=1105 audit(1768437029.488:871): pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.529306 kernel: audit: type=1103 audit(1768437029.492:872): pid=5686 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.619453 sshd[5686]: Connection closed by 10.0.0.1 port 35566 Jan 15 00:30:29.620691 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:29.623000 audit[5683]: USER_END pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.628520 systemd-logind[1646]: Session 22 logged out. Waiting for processes to exit. Jan 15 00:30:29.628930 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:35566.service: Deactivated successfully. Jan 15 00:30:29.632961 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 00:30:29.636590 systemd-logind[1646]: Removed session 22. 
Jan 15 00:30:29.623000 audit[5683]: CRED_DISP pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.656968 kernel: audit: type=1106 audit(1768437029.623:873): pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.657104 kernel: audit: type=1104 audit(1768437029.623:874): pid=5683 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:30:29.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.47:22-10.0.0.1:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:30.336880 containerd[1681]: time="2026-01-15T00:30:30.336623092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:30:30.414918 containerd[1681]: time="2026-01-15T00:30:30.414708159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:30.416828 containerd[1681]: time="2026-01-15T00:30:30.416705338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:30:30.416986 containerd[1681]: time="2026-01-15T00:30:30.416935789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:30.417348 kubelet[2801]: E0115 00:30:30.417135 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:30.417348 kubelet[2801]: E0115 00:30:30.417253 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:30:30.417961 kubelet[2801]: E0115 00:30:30.417528 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54hkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c9fcdc8c-wjfr8_calico-apiserver(ab27570c-5eb0-4b1f-9c2f-ecefc027b548): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:30.419335 kubelet[2801]: E0115 00:30:30.419245 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c9fcdc8c-wjfr8" podUID="ab27570c-5eb0-4b1f-9c2f-ecefc027b548" Jan 15 00:30:33.338449 kubelet[2801]: E0115 00:30:33.338285 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-74646ff747-bfw76" podUID="3f98e226-1bab-40f4-84c7-2ec1cf926463" Jan 15 00:30:34.342591 containerd[1681]: time="2026-01-15T00:30:34.342474209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:30:34.406036 containerd[1681]: time="2026-01-15T00:30:34.405871200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:34.407528 containerd[1681]: time="2026-01-15T00:30:34.407450536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:30:34.407600 containerd[1681]: time="2026-01-15T00:30:34.407559709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:34.408181 kubelet[2801]: E0115 00:30:34.408095 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:30:34.408181 kubelet[2801]: E0115 00:30:34.408152 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:30:34.408729 kubelet[2801]: E0115 00:30:34.408353 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:30:34.409122 containerd[1681]: time="2026-01-15T00:30:34.408731148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:30:34.481810 containerd[1681]: time="2026-01-15T00:30:34.481716093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:30:34.483641 containerd[1681]: time="2026-01-15T00:30:34.483506635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:30:34.483746 containerd[1681]: time="2026-01-15T00:30:34.483637945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:30:34.483967 kubelet[2801]: E0115 00:30:34.483882 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:30:34.483967 kubelet[2801]: E0115 00:30:34.483932 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:30:34.484324 kubelet[2801]: E0115 00:30:34.484189 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bp5qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9sr6x_calico-system(0da1261a-c922-41a4-a50d-63daf18be31b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 15 00:30:34.484707 containerd[1681]: time="2026-01-15T00:30:34.484542339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 15 00:30:34.486142 kubelet[2801]: E0115 00:30:34.485867 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9sr6x" podUID="0da1261a-c922-41a4-a50d-63daf18be31b"
Jan 15 00:30:34.546584 containerd[1681]: time="2026-01-15T00:30:34.546215326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 15 00:30:34.549024 containerd[1681]: time="2026-01-15T00:30:34.548564869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 15 00:30:34.549024 containerd[1681]: time="2026-01-15T00:30:34.548742931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 15 00:30:34.549338 kubelet[2801]: E0115 00:30:34.549223 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 15 00:30:34.549338 kubelet[2801]: E0115 00:30:34.549295 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 15 00:30:34.549523 kubelet[2801]: E0115 00:30:34.549477 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8bmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z9lkl_calico-system(44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 15 00:30:34.551128 kubelet[2801]: E0115 00:30:34.550877 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z9lkl" podUID="44d6a0a4-cff1-4f36-aee4-f6ca9d02fb19"
Jan 15 00:30:34.640299 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:56422.service - OpenSSH per-connection server daemon (10.0.0.1:56422).
Jan 15 00:30:34.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.47:22-10.0.0.1:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:30:34.645204 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 15 00:30:34.645263 kernel: audit: type=1130 audit(1768437034.639:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.47:22-10.0.0.1:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:30:34.723000 audit[5701]: USER_ACCT pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.724899 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 56422 ssh2: RSA SHA256:MVAJAIEgN+El/bX2Cf1mjVR83nhPTGqntdRAeQlZf1I
Jan 15 00:30:34.744979 kernel: audit: type=1101 audit(1768437034.723:877): pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.744000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.745953 sshd-session[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 00:30:34.755855 systemd-logind[1646]: New session 23 of user core.
Jan 15 00:30:34.765232 kernel: audit: type=1103 audit(1768437034.744:878): pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.765385 kernel: audit: type=1006 audit(1768437034.744:879): pid=5701 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jan 15 00:30:34.765517 kernel: audit: type=1300 audit(1768437034.744:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4de56f70 a2=3 a3=0 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 00:30:34.744000 audit[5701]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4de56f70 a2=3 a3=0 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 00:30:34.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 00:30:34.784117 kernel: audit: type=1327 audit(1768437034.744:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 00:30:34.786473 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 15 00:30:34.791000 audit[5701]: USER_START pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.795000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.827192 kernel: audit: type=1105 audit(1768437034.791:880): pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.827281 kernel: audit: type=1103 audit(1768437034.795:881): pid=5704 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.907650 sshd[5704]: Connection closed by 10.0.0.1 port 56422
Jan 15 00:30:34.908498 sshd-session[5701]: pam_unix(sshd:session): session closed for user core
Jan 15 00:30:34.910000 audit[5701]: USER_END pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.917956 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:56422.service: Deactivated successfully.
Jan 15 00:30:34.921213 systemd[1]: session-23.scope: Deactivated successfully.
Jan 15 00:30:34.922988 systemd-logind[1646]: Session 23 logged out. Waiting for processes to exit.
Jan 15 00:30:34.925492 systemd-logind[1646]: Removed session 23.
Jan 15 00:30:34.910000 audit[5701]: CRED_DISP pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.950632 kernel: audit: type=1106 audit(1768437034.910:882): pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.950737 kernel: audit: type=1104 audit(1768437034.910:883): pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 00:30:34.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.47:22-10.0.0.1:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'