Jan 28 06:55:04.281248 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 28 04:05:06 -00 2026 Jan 28 06:55:04.281306 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6 Jan 28 06:55:04.281366 kernel: BIOS-provided physical RAM map: Jan 28 06:55:04.281379 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 06:55:04.281396 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 06:55:04.281407 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 06:55:04.281419 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 28 06:55:04.281437 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 28 06:55:04.281449 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 06:55:04.281460 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 06:55:04.281471 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 06:55:04.281482 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 06:55:04.281493 kernel: NX (Execute Disable) protection: active Jan 28 06:55:04.281510 kernel: APIC: Static calls initialized Jan 28 06:55:04.281523 kernel: SMBIOS 2.8 present. Jan 28 06:55:04.281536 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 28 06:55:04.281548 kernel: DMI: Memory slots populated: 1/1 Jan 28 06:55:04.281565 kernel: Hypervisor detected: KVM Jan 28 06:55:04.281577 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 06:55:04.281589 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 06:55:04.281601 kernel: kvm-clock: using sched offset of 5050902336 cycles Jan 28 06:55:04.281614 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 06:55:04.281626 kernel: tsc: Detected 2500.032 MHz processor Jan 28 06:55:04.281639 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 06:55:04.281652 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 06:55:04.281669 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 06:55:04.281682 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 06:55:04.281694 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 06:55:04.281706 kernel: Using GB pages for direct mapping Jan 28 06:55:04.281718 kernel: ACPI: Early table checksum verification disabled Jan 28 06:55:04.281730 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 28 06:55:04.281743 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281755 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281773 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281785 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 28 06:55:04.281797 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281809 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281821 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281834 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 06:55:04.281846 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 28 06:55:04.281878 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 28 06:55:04.281892 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 28 06:55:04.281904 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 28 06:55:04.281917 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 28 06:55:04.281964 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 28 06:55:04.281983 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 28 06:55:04.281996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 28 06:55:04.282008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 28 06:55:04.282021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 28 06:55:04.282034 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 28 06:55:04.282047 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 28 06:55:04.282072 kernel: Zone ranges: Jan 28 06:55:04.282085 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 06:55:04.282098 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 28 06:55:04.282111 kernel: Normal empty Jan 28 06:55:04.282123 kernel: Device empty Jan 28 06:55:04.282136 kernel: Movable zone start for each node Jan 28 06:55:04.282149 kernel: Early memory node ranges Jan 28 06:55:04.282161 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 06:55:04.282185 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 28 06:55:04.282198 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 28 06:55:04.282210 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 06:55:04.282223 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 06:55:04.282236 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 28 06:55:04.282248 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 06:55:04.282265 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 06:55:04.282289 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 06:55:04.282302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 06:55:04.282315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 06:55:04.282328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 06:55:04.282351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 06:55:04.282364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 06:55:04.282377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 06:55:04.282401 kernel: TSC deadline timer available Jan 28 06:55:04.282415 kernel: CPU topo: Max. logical packages: 16 Jan 28 06:55:04.282427 kernel: CPU topo: Max. logical dies: 16 Jan 28 06:55:04.282440 kernel: CPU topo: Max. dies per package: 1 Jan 28 06:55:04.282452 kernel: CPU topo: Max. 
threads per core: 1 Jan 28 06:55:04.282465 kernel: CPU topo: Num. cores per package: 1 Jan 28 06:55:04.282478 kernel: CPU topo: Num. threads per package: 1 Jan 28 06:55:04.282490 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 28 06:55:04.282514 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 06:55:04.282527 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 06:55:04.282539 kernel: Booting paravirtualized kernel on KVM Jan 28 06:55:04.282552 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 06:55:04.282565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 28 06:55:04.282578 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 28 06:55:04.282591 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 28 06:55:04.282614 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 28 06:55:04.282628 kernel: kvm-guest: PV spinlocks enabled Jan 28 06:55:04.282641 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 06:55:04.282655 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6 Jan 28 06:55:04.282668 kernel: random: crng init done Jan 28 06:55:04.282681 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 06:55:04.282693 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 06:55:04.282729 kernel: Fallback order for Node 0: 0 Jan 28 06:55:04.282742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 28 06:55:04.282754 kernel: Policy zone: DMA32 Jan 28 06:55:04.282779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 06:55:04.282792 kernel: software IO TLB: area num 16. Jan 28 06:55:04.282805 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 28 06:55:04.282818 kernel: Kernel/User page tables isolation: enabled Jan 28 06:55:04.282841 kernel: ftrace: allocating 40128 entries in 157 pages Jan 28 06:55:04.282854 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 06:55:04.282867 kernel: Dynamic Preempt: voluntary Jan 28 06:55:04.282880 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 06:55:04.282893 kernel: rcu: RCU event tracing is enabled. Jan 28 06:55:04.282906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 28 06:55:04.282919 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 06:55:04.282958 kernel: Rude variant of Tasks RCU enabled. Jan 28 06:55:04.282974 kernel: Tracing variant of Tasks RCU enabled. Jan 28 06:55:04.282987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 06:55:04.283000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 28 06:55:04.283013 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 06:55:04.283026 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 28 06:55:04.283039 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 06:55:04.283072 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 28 06:55:04.283086 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 06:55:04.283124 kernel: Console: colour VGA+ 80x25 Jan 28 06:55:04.283147 kernel: printk: legacy console [tty0] enabled Jan 28 06:55:04.283161 kernel: printk: legacy console [ttyS0] enabled Jan 28 06:55:04.283178 kernel: ACPI: Core revision 20240827 Jan 28 06:55:04.283193 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 06:55:04.283206 kernel: x2apic enabled Jan 28 06:55:04.283220 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 06:55:04.283233 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 28 06:55:04.283258 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Jan 28 06:55:04.283272 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 06:55:04.283285 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 28 06:55:04.283298 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 28 06:55:04.283321 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 06:55:04.283335 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 06:55:04.283358 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 06:55:04.283371 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 28 06:55:04.283384 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 28 06:55:04.283397 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 28 06:55:04.283409 kernel: MDS: Mitigation: Clear CPU buffers Jan 28 06:55:04.283422 kernel: MMIO Stale Data: Unknown: No mitigations Jan 28 06:55:04.283435 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 28 06:55:04.283448 kernel: active return thunk: its_return_thunk Jan 28 06:55:04.283472 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 06:55:04.283487 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 06:55:04.283500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 06:55:04.283513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 06:55:04.283526 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 06:55:04.283539 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 28 06:55:04.283552 kernel: Freeing SMP alternatives memory: 32K Jan 28 06:55:04.283565 kernel: pid_max: default: 32768 minimum: 301 Jan 28 06:55:04.283578 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 06:55:04.283591 kernel: landlock: Up and running. Jan 28 06:55:04.283615 kernel: SELinux: Initializing. Jan 28 06:55:04.283629 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 06:55:04.283642 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 06:55:04.283655 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 28 06:55:04.283668 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 28 06:55:04.283682 kernel: signal: max sigframe size: 1776 Jan 28 06:55:04.283695 kernel: rcu: Hierarchical SRCU implementation. Jan 28 06:55:04.283709 kernel: rcu: Max phase no-delay instances is 400. Jan 28 06:55:04.283722 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 28 06:55:04.283747 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 06:55:04.283761 kernel: smp: Bringing up secondary CPUs ... Jan 28 06:55:04.283774 kernel: smpboot: x86: Booting SMP configuration: Jan 28 06:55:04.283787 kernel: .... node #0, CPUs: #1 Jan 28 06:55:04.283800 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 06:55:04.283813 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Jan 28 06:55:04.283827 kernel: Memory: 1912056K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 178544K reserved, 0K cma-reserved) Jan 28 06:55:04.283852 kernel: devtmpfs: initialized Jan 28 06:55:04.283866 kernel: x86/mm: Memory block size: 128MB Jan 28 06:55:04.283880 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 06:55:04.283893 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 28 06:55:04.283906 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 06:55:04.283919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 06:55:04.283933 kernel: audit: initializing netlink subsys (disabled) Jan 28 06:55:04.290988 kernel: audit: type=2000 audit(1769583299.843:1): state=initialized audit_enabled=0 res=1 Jan 28 06:55:04.291004 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 06:55:04.291019 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 06:55:04.291032 kernel: cpuidle: using governor menu Jan 28 06:55:04.291046 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 06:55:04.291060 kernel: dca service started, version 1.12.1 Jan 28 06:55:04.291081 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 28 06:55:04.291108 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 06:55:04.291122 kernel: PCI: Using configuration type 1 for base access Jan 28 06:55:04.291136 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 06:55:04.291150 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 06:55:04.291163 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 06:55:04.291177 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 06:55:04.291191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 06:55:04.291215 kernel: ACPI: Added _OSI(Module Device) Jan 28 06:55:04.291230 kernel: ACPI: Added _OSI(Processor Device) Jan 28 06:55:04.291243 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 06:55:04.291257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 06:55:04.291270 kernel: ACPI: Interpreter enabled Jan 28 06:55:04.291283 kernel: ACPI: PM: (supports S0 S5) Jan 28 06:55:04.291296 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 06:55:04.291320 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 06:55:04.291335 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 06:55:04.291359 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 06:55:04.291372 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 06:55:04.291700 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 06:55:04.292006 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 28 06:55:04.292256 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 28 06:55:04.292278 kernel: PCI host bridge to bus 0000:00 Jan 28 06:55:04.292528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 06:55:04.292735 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 06:55:04.292937 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 06:55:04.293165 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 28 06:55:04.293396 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 06:55:04.293601 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 28 06:55:04.293803 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 06:55:04.297173 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 28 06:55:04.297449 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 28 06:55:04.297695 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 28 06:55:04.297918 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 28 06:55:04.298181 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 28 06:55:04.298417 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 06:55:04.298670 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.298894 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 28 06:55:04.304957 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:55:04.305214 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 06:55:04.305454 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:55:04.305702 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.305925 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 28 06:55:04.306162 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 
06:55:04.306417 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 06:55:04.306637 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:55:04.306878 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.309159 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 28 06:55:04.309410 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:55:04.309640 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 06:55:04.309884 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:55:04.310139 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.310376 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 28 06:55:04.310598 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:55:04.310818 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 06:55:04.313085 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:55:04.313367 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.313607 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 28 06:55:04.313853 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:55:04.314100 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 06:55:04.314323 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:55:04.314574 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.314816 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 28 06:55:04.316088 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:55:04.316322 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 06:55:04.316574 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:55:04.316811 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.317099 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 28 06:55:04.317324 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:55:04.317560 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 06:55:04.317782 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:55:04.319068 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 06:55:04.319326 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 28 06:55:04.319568 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:55:04.319791 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 06:55:04.320036 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:55:04.320272 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 28 06:55:04.320511 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 28 06:55:04.320752 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 28 06:55:04.321309 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 28 06:55:04.322092 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 28 06:55:04.322347 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 28 06:55:04.322576 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 28 06:55:04.322807 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 28 06:55:04.323085 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 28 06:55:04.323318 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 28 06:55:04.323555 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 06:55:04.323798 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 28 06:55:04.324260 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 28 06:55:04.324508 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 28 06:55:04.324763 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 28 06:55:04.325082 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 28 06:55:04.325325 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 28 06:55:04.325570 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 28 06:55:04.325795 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:55:04.326060 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 06:55:04.326325 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:55:04.326581 kernel: pci_bus 0000:02: extended config space not accessible Jan 28 06:55:04.326833 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 28 06:55:04.327083 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 28 06:55:04.327309 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:55:04.327578 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 28 06:55:04.327805 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 28 06:55:04.328044 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 06:55:04.328283 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 28 06:55:04.328521 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 28 06:55:04.328761 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:55:04.329017 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:55:04.329238 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:55:04.329473 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:55:04.329693 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:55:04.329911 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:55:04.329965 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 06:55:04.329981 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 06:55:04.329995 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 06:55:04.330009 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 06:55:04.330023 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 06:55:04.330043 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 06:55:04.330058 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 06:55:04.330084 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 06:55:04.330098 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 06:55:04.330112 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 06:55:04.330126 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 06:55:04.330140 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 06:55:04.330153 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
28 06:55:04.330167 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 06:55:04.330191 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 06:55:04.330205 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 06:55:04.330219 kernel: iommu: Default domain type: Translated Jan 28 06:55:04.330233 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 06:55:04.330247 kernel: PCI: Using ACPI for IRQ routing Jan 28 06:55:04.330261 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 06:55:04.330274 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 06:55:04.330298 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 28 06:55:04.330532 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 06:55:04.330752 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 06:55:04.330987 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 06:55:04.331009 kernel: vgaarb: loaded Jan 28 06:55:04.331023 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 06:55:04.331036 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 06:55:04.331066 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 06:55:04.331080 kernel: pnp: PnP ACPI init Jan 28 06:55:04.331318 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 06:55:04.331351 kernel: pnp: PnP ACPI: found 5 devices Jan 28 06:55:04.331366 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 06:55:04.331380 kernel: NET: Registered PF_INET protocol family Jan 28 06:55:04.331408 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 06:55:04.331423 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 28 06:55:04.331436 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 06:55:04.331450 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 06:55:04.331464 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 28 06:55:04.331478 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 28 06:55:04.331492 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 06:55:04.331516 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 06:55:04.331531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 06:55:04.331545 kernel: NET: Registered PF_XDP protocol family Jan 28 06:55:04.331763 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 28 06:55:04.332015 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 28 06:55:04.332235 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 28 06:55:04.332560 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 28 06:55:04.332912 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 28 06:55:04.333926 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 28 06:55:04.334169 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 28 06:55:04.334403 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 28 06:55:04.334625 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 28 06:55:04.334844 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 28 06:55:04.335103 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 28 06:55:04.335323 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 28 06:55:04.335557 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 28 06:55:04.335775 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 28 06:55:04.336023 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 28 06:55:04.336243 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 28 06:55:04.336888 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 06:55:04.337488 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 06:55:04.337731 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 06:55:04.337983 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 28 06:55:04.338206 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 06:55:04.338440 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:55:04.338660 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 06:55:04.340089 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 28 06:55:04.343017 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 06:55:04.343263 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:55:04.343509 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 06:55:04.343735 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 28 06:55:04.343978 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 06:55:04.344236 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:55:04.344474 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 06:55:04.344694 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 28 06:55:04.344930 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 06:55:04.351209 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:55:04.351485 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 06:55:04.351711 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 28 06:55:04.357642 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 06:55:04.357908 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:55:04.358183 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 06:55:04.358425 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 28 06:55:04.358668 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 06:55:04.358891 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:55:04.360723 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 06:55:04.360979 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 28 06:55:04.361206 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 06:55:04.361444 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:55:04.361695 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 06:55:04.361935 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 28 06:55:04.364079 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 06:55:04.364305 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:55:04.364531 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 06:55:04.364735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 06:55:04.364939 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 06:55:04.365174 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 28 06:55:04.365409 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 06:55:04.365613 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 28 06:55:04.365838 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 28 06:55:04.368088 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 28 06:55:04.368302 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 06:55:04.368536 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 28 06:55:04.368777 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 28 06:55:04.369004 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 28 06:55:04.369228 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 06:55:04.369470 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 28 06:55:04.369678 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 28 06:55:04.369904 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 06:55:04.372162 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 28 06:55:04.372387 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 28 06:55:04.372597 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 06:55:04.372824 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 28 06:55:04.373069 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 28 06:55:04.373298 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 06:55:04.373533 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 28 06:55:04.373742 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 28 06:55:04.373968 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 06:55:04.374191 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 28 06:55:04.374412 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 28 06:55:04.374643 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 06:55:04.374868 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 28 06:55:04.375111 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 28 06:55:04.375319 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 06:55:04.375353 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 06:55:04.375384 kernel: PCI: CLS 0 bytes, default 64 Jan 28 06:55:04.375400 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 28 06:55:04.375415 kernel: software IO TLB: mapped [mem 0x0000000075000000-0x0000000079000000] (64MB) Jan 28 06:55:04.375429 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 28 06:55:04.375444 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 28 06:55:04.375459 kernel: Initialise system trusted keyrings Jan 28 06:55:04.375474 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 28 06:55:04.375499 
kernel: Key type asymmetric registered Jan 28 06:55:04.375514 kernel: Asymmetric key parser 'x509' registered Jan 28 06:55:04.375528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 06:55:04.375542 kernel: io scheduler mq-deadline registered Jan 28 06:55:04.375557 kernel: io scheduler kyber registered Jan 28 06:55:04.375571 kernel: io scheduler bfq registered Jan 28 06:55:04.375796 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 28 06:55:04.376062 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 28 06:55:04.376309 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.376546 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 28 06:55:04.376820 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 28 06:55:04.377076 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.377316 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 28 06:55:04.377553 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 28 06:55:04.377772 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.378011 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 28 06:55:04.378233 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 28 06:55:04.378470 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.378709 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 28 06:55:04.378929 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 28 06:55:04.379181 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.379418 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 28 06:55:04.379638 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 28 06:55:04.379899 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.380142 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 28 06:55:04.380374 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 28 06:55:04.380596 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.380816 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 28 06:55:04.381086 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 28 06:55:04.381307 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 06:55:04.381329 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 06:55:04.381356 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 06:55:04.381371 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 06:55:04.381386 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 06:55:04.381415 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 06:55:04.381430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 
06:55:04.381445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 06:55:04.381459 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 06:55:04.381474 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 06:55:04.381708 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 28 06:55:04.381959 kernel: rtc_cmos 00:03: registered as rtc0 Jan 28 06:55:04.382177 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T06:55:02 UTC (1769583302) Jan 28 06:55:04.382424 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 28 06:55:04.382446 kernel: intel_pstate: CPU model not supported Jan 28 06:55:04.382461 kernel: NET: Registered PF_INET6 protocol family Jan 28 06:55:04.382475 kernel: Segment Routing with IPv6 Jan 28 06:55:04.382490 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 06:55:04.382520 kernel: NET: Registered PF_PACKET protocol family Jan 28 06:55:04.382535 kernel: Key type dns_resolver registered Jan 28 06:55:04.382550 kernel: IPI shorthand broadcast: enabled Jan 28 06:55:04.382564 kernel: sched_clock: Marking stable (2235003597, 226087540)->(2585574297, -124483160) Jan 28 06:55:04.382579 kernel: registered taskstats version 1 Jan 28 06:55:04.382593 kernel: Loading compiled-in X.509 certificates Jan 28 06:55:04.382607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 60cf4c6c8cc6ec3eb800b1f9cf1d8cc38776b17f' Jan 28 06:55:04.382633 kernel: Demotion targets for Node 0: null Jan 28 06:55:04.382648 kernel: Key type .fscrypt registered Jan 28 06:55:04.382662 kernel: Key type fscrypt-provisioning registered Jan 28 06:55:04.382677 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 06:55:04.382691 kernel: ima: Allocated hash algorithm: sha1 Jan 28 06:55:04.382706 kernel: ima: No architecture policies found Jan 28 06:55:04.382720 kernel: clk: Disabling unused clocks Jan 28 06:55:04.382745 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 28 06:55:04.382760 kernel: Write protecting the kernel read-only data: 47104k Jan 28 06:55:04.382775 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 06:55:04.382789 kernel: Run /init as init process Jan 28 06:55:04.382804 kernel: with arguments: Jan 28 06:55:04.382818 kernel: /init Jan 28 06:55:04.382832 kernel: with environment: Jan 28 06:55:04.382846 kernel: HOME=/ Jan 28 06:55:04.382878 kernel: TERM=linux Jan 28 06:55:04.382892 kernel: ACPI: bus type USB registered Jan 28 06:55:04.382907 kernel: usbcore: registered new interface driver usbfs Jan 28 06:55:04.382921 kernel: usbcore: registered new interface driver hub Jan 28 06:55:04.382936 kernel: usbcore: registered new device driver usb Jan 28 06:55:04.383197 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 06:55:04.383439 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 28 06:55:04.383690 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 06:55:04.383915 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 06:55:04.384160 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 28 06:55:04.384400 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 28 06:55:04.384689 kernel: hub 1-0:1.0: USB hub found Jan 28 06:55:04.384932 kernel: hub 1-0:1.0: 4 ports detected Jan 28 06:55:04.385233 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 28 06:55:04.385514 kernel: hub 2-0:1.0: USB hub found Jan 28 06:55:04.385766 kernel: hub 2-0:1.0: 4 ports detected Jan 28 06:55:04.385788 kernel: SCSI subsystem initialized Jan 28 06:55:04.385803 kernel: libata version 3.00 loaded. Jan 28 06:55:04.386102 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 06:55:04.386126 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 06:55:04.386352 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 06:55:04.386574 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 06:55:04.386792 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 06:55:04.387078 kernel: scsi host0: ahci Jan 28 06:55:04.387343 kernel: scsi host1: ahci Jan 28 06:55:04.387589 kernel: scsi host2: ahci Jan 28 06:55:04.387854 kernel: scsi host3: ahci Jan 28 06:55:04.388110 kernel: scsi host4: ahci Jan 28 06:55:04.388356 kernel: scsi host5: ahci Jan 28 06:55:04.388394 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 28 06:55:04.388409 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 28 06:55:04.388424 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 28 06:55:04.388439 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 28 06:55:04.388453 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 28 06:55:04.388468 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 28 06:55:04.388732 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 06:55:04.388771 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.388786 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 06:55:04.388801 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.388815 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.388829 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.388844 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.388869 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 06:55:04.389142 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 28 06:55:04.389166 kernel: usbcore: registered new interface driver usbhid Jan 28 06:55:04.389394 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 28 06:55:04.389417 kernel: usbhid: USB HID core driver Jan 28 06:55:04.389432 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 06:55:04.389462 kernel: GPT:25804799 != 125829119 Jan 28 06:55:04.389477 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 06:55:04.389492 kernel: GPT:25804799 != 125829119 Jan 28 06:55:04.389506 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 06:55:04.389521 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 28 06:55:04.389535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:55:04.389814 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 28 06:55:04.389853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 28 06:55:04.389868 kernel: device-mapper: uevent: version 1.0.3 Jan 28 06:55:04.389883 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 06:55:04.389898 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 06:55:04.389912 kernel: raid6: sse2x4 gen() 13570 MB/s Jan 28 06:55:04.389927 kernel: raid6: sse2x2 gen() 9278 MB/s Jan 28 06:55:04.389968 kernel: raid6: sse2x1 gen() 9613 MB/s Jan 28 06:55:04.389984 kernel: raid6: using algorithm sse2x4 gen() 13570 MB/s Jan 28 06:55:04.389999 kernel: raid6: .... xor() 7860 MB/s, rmw enabled Jan 28 06:55:04.390013 kernel: raid6: using ssse3x2 recovery algorithm Jan 28 06:55:04.390027 kernel: xor: automatically using best checksumming function avx Jan 28 06:55:04.390042 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 06:55:04.390057 kernel: BTRFS: device fsid d4cc183a-4a92-40c5-bcbb-0af9ab626d3e devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (194) Jan 28 06:55:04.390072 kernel: BTRFS info (device dm-0): first mount of filesystem d4cc183a-4a92-40c5-bcbb-0af9ab626d3e Jan 28 06:55:04.390099 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:55:04.390114 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 06:55:04.390128 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 06:55:04.390142 kernel: loop: module loaded Jan 28 06:55:04.390157 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 06:55:04.390171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 06:55:04.390192 systemd[1]: Successfully made /usr/ read-only. Jan 28 06:55:04.390222 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 06:55:04.390238 systemd[1]: Detected virtualization kvm. Jan 28 06:55:04.390252 systemd[1]: Detected architecture x86-64. Jan 28 06:55:04.390267 systemd[1]: Running in initrd. Jan 28 06:55:04.390282 systemd[1]: No hostname configured, using default hostname. Jan 28 06:55:04.390309 systemd[1]: Hostname set to . Jan 28 06:55:04.390325 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 06:55:04.390351 systemd[1]: Queued start job for default target initrd.target. Jan 28 06:55:04.390368 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 06:55:04.390383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:55:04.390398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:55:04.390415 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 06:55:04.390443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 06:55:04.390460 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 06:55:04.390476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 06:55:04.390492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 28 06:55:04.390507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:55:04.390533 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:55:04.390549 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:55:04.390565 systemd[1]: Reached target slices.target - Slice Units. Jan 28 06:55:04.390580 systemd[1]: Reached target swap.target - Swaps. Jan 28 06:55:04.390596 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:55:04.390611 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 06:55:04.390627 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:55:04.390653 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 06:55:04.390669 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 06:55:04.390685 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 06:55:04.390701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:55:04.390716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 06:55:04.390732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:55:04.390747 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:55:04.390774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 06:55:04.390790 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 06:55:04.390805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 06:55:04.390821 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 06:55:04.390837 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 06:55:04.390853 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 06:55:04.390868 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 06:55:04.390895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 06:55:04.390912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:55:04.390928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 06:55:04.391026 systemd-journald[331]: Collecting audit messages is enabled. Jan 28 06:55:04.391062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:55:04.391078 kernel: audit: type=1130 audit(1769583304.284:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.391109 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 06:55:04.391125 kernel: audit: type=1130 audit(1769583304.292:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.391140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 06:55:04.391156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 28 06:55:04.391170 kernel: Bridge firewalling registered Jan 28 06:55:04.391185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 06:55:04.391201 kernel: audit: type=1130 audit(1769583304.350:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.391228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:55:04.391244 systemd-journald[331]: Journal started Jan 28 06:55:04.391271 systemd-journald[331]: Runtime Journal (/run/log/journal/778f741724164167a520c661c8098199) is 4.7M, max 37.7M, 33M free. Jan 28 06:55:04.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.348282 systemd-modules-load[333]: Inserted module 'br_netfilter' Jan 28 06:55:04.432783 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 06:55:04.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.439023 kernel: audit: type=1130 audit(1769583304.432:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.440789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:55:04.448480 kernel: audit: type=1130 audit(1769583304.440:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.447624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 06:55:04.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.457028 kernel: audit: type=1130 audit(1769583304.449:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.456450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:55:04.464047 kernel: audit: type=1130 audit(1769583304.456:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:04.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.463135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 06:55:04.464000 audit: BPF prog-id=6 op=LOAD Jan 28 06:55:04.468961 kernel: audit: type=1334 audit(1769583304.464:9): prog-id=6 op=LOAD Jan 28 06:55:04.467819 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:55:04.473140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:55:04.478245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:55:04.510175 systemd-tmpfiles[355]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 06:55:04.511628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 06:55:04.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.521064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:55:04.523364 kernel: audit: type=1130 audit(1769583304.515:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.524090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:55:04.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.529159 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 06:55:04.560724 systemd-resolved[352]: Positive Trust Anchors: Jan 28 06:55:04.561750 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:55:04.561758 systemd-resolved[352]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 06:55:04.561805 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:55:04.590832 dracut-cmdline[371]: dracut-109 Jan 28 06:55:04.590832 dracut-cmdline[371]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6 Jan 28 06:55:04.608980 systemd-resolved[352]: Defaulting to hostname 'linux'. Jan 28 06:55:04.611545 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 06:55:04.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.613178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:55:04.690981 kernel: Loading iSCSI transport class v2.0-870. Jan 28 06:55:04.708980 kernel: iscsi: registered transport (tcp) Jan 28 06:55:04.739318 kernel: iscsi: registered transport (qla4xxx) Jan 28 06:55:04.739432 kernel: QLogic iSCSI HBA Driver Jan 28 06:55:04.773978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 06:55:04.796448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:55:04.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.798521 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 06:55:04.868277 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 06:55:04.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.872036 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 06:55:04.874164 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 06:55:04.916144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 06:55:04.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:04.917000 audit: BPF prog-id=7 op=LOAD Jan 28 06:55:04.917000 audit: BPF prog-id=8 op=LOAD Jan 28 06:55:04.919214 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:55:04.957807 systemd-udevd[598]: Using default interface naming scheme 'v257'. Jan 28 06:55:04.975706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:55:04.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:04.980719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 06:55:05.015519 dracut-pre-trigger[669]: rd.md=0: removing MD RAID activation Jan 28 06:55:05.022518 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 06:55:05.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.024000 audit: BPF prog-id=9 op=LOAD Jan 28 06:55:05.027863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:55:05.058789 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:55:05.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.062846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 06:55:05.091321 systemd-networkd[716]: lo: Link UP Jan 28 06:55:05.092337 systemd-networkd[716]: lo: Gained carrier Jan 28 06:55:05.093859 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 06:55:05.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.094743 systemd[1]: Reached target network.target - Network. Jan 28 06:55:05.221476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:55:05.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.225365 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 06:55:05.383282 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 06:55:05.401719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 06:55:05.429342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:55:05.442451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 06:55:05.445495 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 06:55:05.472217 disk-uuid[770]: Primary Header is updated. Jan 28 06:55:05.472217 disk-uuid[770]: Secondary Entries is updated. Jan 28 06:55:05.472217 disk-uuid[770]: Secondary Header is updated. 
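The coldplug entries above show udev publishing the boot disks under /dev/disk/by-label and /dev/disk/by-partlabel (EFI-SYSTEM, ROOT, OEM, USR-A), which systemd then exposes as .device units. A minimal sketch, assuming the standard udev symlink layout, that lists those namespaces:

```python
#!/usr/bin/env python3
"""Sketch: list the /dev/disk/by-label and /dev/disk/by-partlabel symlinks
udev creates, the namespaces the .device units above are derived from."""
from pathlib import Path

def disk_links(namespace: str) -> dict:
    base = Path("/dev/disk") / namespace
    if not base.is_dir():   # absent until udev has processed the block devices
        return {}
    # Each entry is a symlink such as ROOT -> ../../vda9; resolve() follows it.
    return {link.name: str(link.resolve()) for link in sorted(base.iterdir())}

if __name__ == "__main__":
    for ns in ("by-label", "by-partlabel"):
        print(ns, disk_links(ns))
```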
Jan 28 06:55:05.491616 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 06:55:05.497464 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 06:55:05.517736 kernel: AES CTR mode by8 optimization enabled Jan 28 06:55:05.582588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 06:55:05.582802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:55:05.595391 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 28 06:55:05.595427 kernel: audit: type=1131 audit(1769583305.583:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.584489 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:55:05.599091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:55:05.614071 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:55:05.614086 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:55:05.616754 systemd-networkd[716]: eth0: Link UP Jan 28 06:55:05.617110 systemd-networkd[716]: eth0: Gained carrier Jan 28 06:55:05.617125 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:55:05.640055 systemd-networkd[716]: eth0: DHCPv4 address 10.230.31.94/30, gateway 10.230.31.93 acquired from 10.230.31.93 Jan 28 06:55:05.757000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:55:05.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.764983 kernel: audit: type=1130 audit(1769583305.758:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.776515 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 06:55:05.785109 kernel: audit: type=1130 audit(1769583305.776:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.778577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:55:05.783662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:55:05.784388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 06:55:05.786504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 06:55:05.817394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
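The DHCPv4 lease above is a /30, so it carries exactly two usable addresses: the gateway and the instance itself. A quick check with the stdlib ipaddress module, using the values straight from the log:

```python
#!/usr/bin/env python3
"""Sketch: confirm the lease logged above (10.230.31.94/30, gateway
10.230.31.93) leaves only the gateway and the instance as usable hosts."""
import ipaddress

iface = ipaddress.ip_interface("10.230.31.94/30")
gateway = ipaddress.ip_address("10.230.31.93")

print("network:", iface.network)                       # 10.230.31.92/30
print("usable hosts:", [str(h) for h in iface.network.hosts()])
print("gateway in subnet:", gateway in iface.network)  # True
```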
Jan 28 06:55:05.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:05.824010 kernel: audit: type=1130 audit(1769583305.817:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.558679 disk-uuid[771]: Warning: The kernel is still using the old partition table. Jan 28 06:55:06.558679 disk-uuid[771]: The new table will be used at the next reboot or after you Jan 28 06:55:06.558679 disk-uuid[771]: run partprobe(8) or kpartx(8) Jan 28 06:55:06.558679 disk-uuid[771]: The operation has completed successfully. Jan 28 06:55:06.569903 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 06:55:06.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.570114 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 06:55:06.582954 kernel: audit: type=1130 audit(1769583306.570:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.583006 kernel: audit: type=1131 audit(1769583306.570:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.574135 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 06:55:06.620004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Jan 28 06:55:06.620074 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:55:06.623311 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:55:06.629488 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:55:06.629539 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:55:06.639107 kernel: BTRFS info (device vda6): last unmount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:55:06.640094 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 06:55:06.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.643143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 06:55:06.649086 kernel: audit: type=1130 audit(1769583306.639:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:06.863527 ignition[875]: Ignition 2.24.0 Jan 28 06:55:06.863565 ignition[875]: Stage: fetch-offline Jan 28 06:55:06.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.868438 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 06:55:06.877667 kernel: audit: type=1130 audit(1769583306.868:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.863701 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:06.871596 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 06:55:06.863744 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:06.863917 ignition[875]: parsed url from cmdline: "" Jan 28 06:55:06.863924 ignition[875]: no config URL provided Jan 28 06:55:06.864032 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 06:55:06.864058 ignition[875]: no config at "/usr/lib/ignition/user.ign" Jan 28 06:55:06.864071 ignition[875]: failed to fetch config: resource requires networking Jan 28 06:55:06.864409 ignition[875]: Ignition finished successfully Jan 28 06:55:06.908939 ignition[884]: Ignition 2.24.0 Jan 28 06:55:06.909978 ignition[884]: Stage: fetch Jan 28 06:55:06.910271 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:06.910291 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:06.910479 ignition[884]: parsed url from cmdline: "" Jan 28 06:55:06.910486 ignition[884]: no config URL provided Jan 28 06:55:06.910507 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 06:55:06.910522 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 28 06:55:06.910667 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 06:55:06.910687 ignition[884]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 06:55:06.910707 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 28 06:55:06.927606 ignition[884]: GET result: OK Jan 28 06:55:06.927783 ignition[884]: parsing config with SHA512: 632e03981b514296824576612bbd5cc0f9e8848598a21a97256c851b39e26b30705b68adbd2cf8ab2bd76a6d968762b078ee3c09895d129f7f518c56597bbd7b Jan 28 06:55:06.937213 unknown[884]: fetched base config from "system" Jan 28 06:55:06.937232 unknown[884]: fetched base config from "system" Jan 28 06:55:06.938524 ignition[884]: fetch: fetch complete Jan 28 06:55:06.937272 unknown[884]: fetched user config from "openstack" Jan 28 06:55:06.938533 ignition[884]: fetch: fetch passed Jan 28 06:55:06.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.941905 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 06:55:06.948209 kernel: audit: type=1130 audit(1769583306.941:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:06.938614 ignition[884]: Ignition finished successfully Jan 28 06:55:06.946130 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 06:55:06.981886 ignition[890]: Ignition 2.24.0 Jan 28 06:55:06.981914 ignition[890]: Stage: kargs Jan 28 06:55:06.982181 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:06.982199 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:06.983548 ignition[890]: kargs: kargs passed Jan 28 06:55:06.983619 ignition[890]: Ignition finished successfully Jan 28 06:55:06.994055 kernel: audit: type=1130 audit(1769583306.987:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:06.987433 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 06:55:06.990262 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 06:55:07.037872 ignition[896]: Ignition 2.24.0 Jan 28 06:55:07.037897 ignition[896]: Stage: disks Jan 28 06:55:07.038156 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:07.042183 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 06:55:07.038174 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:07.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:07.044419 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 06:55:07.039524 ignition[896]: disks: disks passed Jan 28 06:55:07.045345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 06:55:07.039594 ignition[896]: Ignition finished successfully Jan 28 06:55:07.046930 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 06:55:07.048192 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 06:55:07.049722 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:55:07.053145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 06:55:07.097049 systemd-fsck[904]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 28 06:55:07.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:07.100065 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 06:55:07.104895 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 06:55:07.163165 systemd-networkd[716]: eth0: Gained IPv6LL Jan 28 06:55:07.251320 kernel: EXT4-fs (vda9): mounted filesystem 07ff5302-22ec-4ed8-8e90-e96c5bc64457 r/w with ordered data mode. Quota mode: none. Jan 28 06:55:07.252423 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 06:55:07.253913 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 06:55:07.257413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
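The fetch stage above retrieves the OpenStack user_data from the link-local metadata endpoint and logs the SHA512 of the config it parsed. A minimal sketch of the same fetch-and-digest step; this is not Ignition's own code, and it only works from inside an instance that can reach 169.254.169.254:

```python
#!/usr/bin/env python3
"""Sketch: fetch the OpenStack user_data from the metadata endpoint logged
above and print its SHA512, the digest format the fetch stage logs."""
import hashlib
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"  # from the log above

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = resp.read()

print("bytes fetched:", len(data))
print("SHA512:", hashlib.sha512(data).hexdigest())
```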
Jan 28 06:55:07.260064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 06:55:07.263277 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 06:55:07.264319 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 28 06:55:07.268730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 06:55:07.269865 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 06:55:07.279868 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 06:55:07.283467 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 06:55:07.297810 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Jan 28 06:55:07.297842 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:55:07.297870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:55:07.303762 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:55:07.303810 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:55:07.309613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 06:55:07.378971 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:07.528928 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 06:55:07.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:07.533077 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 06:55:07.535142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 06:55:07.565973 kernel: BTRFS info (device vda6): last unmount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:55:07.589094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 06:55:07.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:07.601254 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 06:55:07.605460 ignition[1013]: INFO : Ignition 2.24.0 Jan 28 06:55:07.605460 ignition[1013]: INFO : Stage: mount Jan 28 06:55:07.607185 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:07.607185 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:07.607185 ignition[1013]: INFO : mount: mount passed Jan 28 06:55:07.610641 ignition[1013]: INFO : Ignition finished successfully Jan 28 06:55:07.610864 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 06:55:07.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:08.409991 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:08.670967 systemd-networkd[716]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87d7:24:19ff:fee6:1f5e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87d7:24:19ff:fee6:1f5e/64 assigned by NDisc. Jan 28 06:55:08.670980 systemd-networkd[716]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 06:55:10.420975 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:14.432041 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:14.439331 coreos-metadata[914]: Jan 28 06:55:14.439 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:55:14.468546 coreos-metadata[914]: Jan 28 06:55:14.468 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 06:55:14.483848 coreos-metadata[914]: Jan 28 06:55:14.483 INFO Fetch successful Jan 28 06:55:14.485518 coreos-metadata[914]: Jan 28 06:55:14.485 INFO wrote hostname srv-gf17r.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 06:55:14.487130 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 28 06:55:14.503068 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 06:55:14.503110 kernel: audit: type=1130 audit(1769583314.490:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:14.503134 kernel: audit: type=1131 audit(1769583314.490:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:14.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:14.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:14.487300 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 06:55:14.494118 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 06:55:14.521677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 06:55:14.555970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 28 06:55:14.556056 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:55:14.558309 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:55:14.564381 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:55:14.564470 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:55:14.567452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
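coreos-metadata above fails to locate a config-drive and falls back to the metadata service API for the hostname, which it then writes under /sysroot/etc. A minimal sketch of that fallback; the output path below is illustrative only, not the agent's real target:

```python
#!/usr/bin/env python3
"""Sketch of the fallback coreos-metadata logs above: prefer a config-drive
labelled config-2, otherwise ask the metadata service for the hostname."""
import os
import urllib.request

CONFIG_DRIVE = "/dev/disk/by-label/config-2"                       # checked in the log
HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # fetched in the log
OUT_PATH = "/tmp/hostname.example"                                 # hypothetical target

def fetch_hostname() -> str:
    if os.path.exists(CONFIG_DRIVE):
        # A real agent would mount the config-drive and read its metadata here.
        raise SystemExit("config-drive present; mount it instead of using the API")
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    hostname = fetch_hostname()
    with open(OUT_PATH, "w") as f:
        f.write(hostname + "\n")
    print("wrote", hostname, "to", OUT_PATH)
```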
Jan 28 06:55:14.607979 ignition[1047]: INFO : Ignition 2.24.0 Jan 28 06:55:14.607979 ignition[1047]: INFO : Stage: files Jan 28 06:55:14.607979 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:14.607979 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:14.612434 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 28 06:55:14.614759 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 06:55:14.614759 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 06:55:14.620495 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 06:55:14.621717 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 06:55:14.623221 unknown[1047]: wrote ssh authorized keys file for user: core Jan 28 06:55:14.624347 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 06:55:14.627871 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 06:55:14.629331 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 06:55:14.810571 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 06:55:15.136648 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:55:15.138236 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:55:15.147023 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 28 06:55:15.472356 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 06:55:16.665471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 28 06:55:16.665471 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 06:55:16.672682 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:55:16.674064 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:55:16.674064 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 06:55:16.674064 ignition[1047]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 06:55:16.674064 ignition[1047]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 06:55:16.685643 kernel: audit: type=1130 audit(1769583316.678:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.685781 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:55:16.685781 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:55:16.685781 ignition[1047]: INFO : files: files passed Jan 28 06:55:16.685781 ignition[1047]: INFO : Ignition finished successfully Jan 28 06:55:16.678812 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 06:55:16.682102 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 06:55:16.691000 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 06:55:16.700725 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 06:55:16.702056 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 06:55:16.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:16.710113 kernel: audit: type=1130 audit(1769583316.702:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.710183 kernel: audit: type=1131 audit(1769583316.702:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.732098 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:55:16.733922 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:55:16.735496 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:55:16.738244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:55:16.740367 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 06:55:16.747139 kernel: audit: type=1130 audit(1769583316.739:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.748764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 06:55:16.801678 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 06:55:16.801877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 06:55:16.815056 kernel: audit: type=1130 audit(1769583316.802:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.815094 kernel: audit: type=1131 audit(1769583316.802:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.805013 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 06:55:16.814601 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 06:55:16.815607 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 06:55:16.817151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 06:55:16.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:16.851198 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 06:55:16.864392 kernel: audit: type=1130 audit(1769583316.851:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.854392 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 06:55:16.886880 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 06:55:16.888318 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:55:16.889189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:55:16.890989 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 06:55:16.898894 kernel: audit: type=1131 audit(1769583316.892:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.891740 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 06:55:16.891980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 06:55:16.898805 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 06:55:16.899759 systemd[1]: Stopped target basic.target - Basic System. Jan 28 06:55:16.901170 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 06:55:16.902711 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 06:55:16.904137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 06:55:16.905496 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:55:16.907096 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 06:55:16.908603 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:55:16.910317 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 06:55:16.911707 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 06:55:16.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.913315 systemd[1]: Stopped target swap.target - Swaps. Jan 28 06:55:16.914682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 06:55:16.914894 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 06:55:16.916735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:55:16.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.917738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:55:16.918996 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 28 06:55:16.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.919301 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:55:16.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.920516 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 06:55:16.920769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 06:55:16.922717 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 06:55:16.922906 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:55:16.924798 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 06:55:16.925090 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 06:55:16.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.928246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 06:55:16.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.932252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 06:55:16.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.933713 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 06:55:16.934027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:55:16.937176 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 06:55:16.937433 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:55:16.939177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 06:55:16.939358 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:55:16.951169 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 06:55:16.951323 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 06:55:16.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.970375 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 28 06:55:16.973939 ignition[1102]: INFO : Ignition 2.24.0 Jan 28 06:55:16.973939 ignition[1102]: INFO : Stage: umount Jan 28 06:55:16.973939 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:55:16.973939 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 06:55:16.973939 ignition[1102]: INFO : umount: umount passed Jan 28 06:55:16.973939 ignition[1102]: INFO : Ignition finished successfully Jan 28 06:55:16.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.976661 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 06:55:16.976856 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 06:55:16.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.980031 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 06:55:16.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.980196 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 06:55:16.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.983264 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 06:55:16.983381 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 06:55:16.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.984231 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 06:55:16.984314 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 06:55:16.985636 systemd[1]: Stopped target network.target - Network. Jan 28 06:55:16.986958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 06:55:16.987053 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 06:55:16.988585 systemd[1]: Stopped target paths.target - Path Units. Jan 28 06:55:16.989873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 06:55:16.990281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:55:17.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.991427 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 06:55:17.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:16.992769 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 06:55:16.994127 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 06:55:16.994208 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 28 06:55:16.995564 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 06:55:16.995628 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:55:16.997187 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 06:55:16.997241 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 06:55:16.998609 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 06:55:16.998695 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 06:55:17.000246 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 06:55:17.000321 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 06:55:17.002271 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 06:55:17.004556 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 06:55:17.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.015722 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 06:55:17.015961 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 06:55:17.020935 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 06:55:17.021169 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 06:55:17.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.023000 audit: BPF prog-id=9 op=UNLOAD Jan 28 06:55:17.024000 audit: BPF prog-id=6 op=UNLOAD Jan 28 06:55:17.025116 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 06:55:17.026745 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 06:55:17.026847 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:55:17.029686 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 06:55:17.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.036243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 06:55:17.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.036358 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 06:55:17.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.037245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 06:55:17.037320 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:55:17.038589 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 06:55:17.038671 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 06:55:17.040233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 28 06:55:17.056396 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 06:55:17.057339 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:55:17.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.061520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 06:55:17.061622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 06:55:17.063389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 06:55:17.063451 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:55:17.065846 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 06:55:17.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.065977 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 06:55:17.071058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 06:55:17.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.071153 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 06:55:17.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.072457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 06:55:17.072540 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 06:55:17.081368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 06:55:17.082259 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 06:55:17.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.082363 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:55:17.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.083957 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 06:55:17.084035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:55:17.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.086149 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 06:55:17.086229 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 28 06:55:17.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.089053 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 06:55:17.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.089131 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:55:17.092564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 06:55:17.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.092653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:55:17.095826 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 06:55:17.097999 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 06:55:17.105588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 06:55:17.105760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 06:55:17.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.115099 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 06:55:17.115283 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 06:55:17.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.116985 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 06:55:17.117984 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 06:55:17.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:17.118067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 06:55:17.120769 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 06:55:17.142549 systemd[1]: Switching root. Jan 28 06:55:17.184622 systemd-journald[331]: Journal stopped Jan 28 06:55:18.821813 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). 
Jan 28 06:55:18.821936 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 06:55:18.824007 kernel: SELinux: policy capability open_perms=1 Jan 28 06:55:18.824041 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 06:55:18.824064 kernel: SELinux: policy capability always_check_network=0 Jan 28 06:55:18.824092 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 06:55:18.824119 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 06:55:18.824155 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 06:55:18.824179 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 06:55:18.824199 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 06:55:18.824227 systemd[1]: Successfully loaded SELinux policy in 81.886ms. Jan 28 06:55:18.824264 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.809ms. Jan 28 06:55:18.824288 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 06:55:18.824312 systemd[1]: Detected virtualization kvm. Jan 28 06:55:18.824834 systemd[1]: Detected architecture x86-64. Jan 28 06:55:18.824924 systemd[1]: Detected first boot. Jan 28 06:55:18.824971 systemd[1]: Hostname set to . Jan 28 06:55:18.824997 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 06:55:18.825020 zram_generator::config[1146]: No configuration found. Jan 28 06:55:18.825050 kernel: Guest personality initialized and is inactive Jan 28 06:55:18.825076 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 06:55:18.825098 kernel: Initialized host personality Jan 28 06:55:18.825120 kernel: NET: Registered PF_VSOCK protocol family Jan 28 06:55:18.825142 systemd[1]: Populated /etc with preset unit settings. Jan 28 06:55:18.825165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 06:55:18.825188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 06:55:18.825212 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 06:55:18.825246 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 06:55:18.825273 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 06:55:18.825295 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 06:55:18.825317 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 06:55:18.825339 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 06:55:18.825362 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 06:55:18.825384 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 06:55:18.825411 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 06:55:18.825435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:55:18.825457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:55:18.825490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 28 06:55:18.825515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 06:55:18.825544 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 06:55:18.825572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 06:55:18.825597 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 06:55:18.825619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:55:18.825642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:55:18.825664 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 06:55:18.825687 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 06:55:18.825715 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 06:55:18.825741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 06:55:18.825763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:55:18.825785 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 06:55:18.825808 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 06:55:18.825830 systemd[1]: Reached target slices.target - Slice Units. Jan 28 06:55:18.825852 systemd[1]: Reached target swap.target - Swaps. Jan 28 06:55:18.825904 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 06:55:18.825930 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 06:55:18.827985 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 06:55:18.828016 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 06:55:18.828040 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 06:55:18.828062 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:55:18.828084 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 06:55:18.828113 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 06:55:18.828137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 06:55:18.828160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:55:18.828183 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 06:55:18.828206 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 06:55:18.828229 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 06:55:18.828252 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 06:55:18.828276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:18.828303 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 06:55:18.828327 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 06:55:18.828350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 06:55:18.828372 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 28 06:55:18.828396 systemd[1]: Reached target machines.target - Containers. Jan 28 06:55:18.828418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 06:55:18.828446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:55:18.828470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 06:55:18.828494 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 06:55:18.828516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 06:55:18.828538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 06:55:18.828561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 06:55:18.828584 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 06:55:18.828611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 06:55:18.828635 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 06:55:18.828659 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 06:55:18.828681 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 06:55:18.828705 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 06:55:18.828731 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 06:55:18.828792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:55:18.828820 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 06:55:18.828843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 06:55:18.828877 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 06:55:18.828902 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 06:55:18.828931 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 06:55:18.828977 kernel: fuse: init (API version 7.41) Jan 28 06:55:18.829001 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 06:55:18.829025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:18.829049 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 06:55:18.829071 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 06:55:18.829099 kernel: ACPI: bus type drm_connector registered Jan 28 06:55:18.829125 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 06:55:18.829148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 06:55:18.829171 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 06:55:18.829193 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 06:55:18.829220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:55:18.829247 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 28 06:55:18.829270 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 06:55:18.829294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 06:55:18.829317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 06:55:18.829339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 06:55:18.829367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 06:55:18.829402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 06:55:18.829424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 06:55:18.829446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 06:55:18.829468 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 06:55:18.829490 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 06:55:18.829511 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 06:55:18.829598 systemd-journald[1241]: Collecting audit messages is enabled. Jan 28 06:55:18.829644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 06:55:18.829674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 06:55:18.829698 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:55:18.829721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 06:55:18.829745 systemd-journald[1241]: Journal started Jan 28 06:55:18.829783 systemd-journald[1241]: Runtime Journal (/run/log/journal/778f741724164167a520c661c8098199) is 4.7M, max 37.7M, 33M free. Jan 28 06:55:18.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.831997 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 06:55:18.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.634000 audit: BPF prog-id=14 op=UNLOAD Jan 28 06:55:18.634000 audit: BPF prog-id=13 op=UNLOAD Jan 28 06:55:18.639000 audit: BPF prog-id=15 op=LOAD Jan 28 06:55:18.640000 audit: BPF prog-id=16 op=LOAD Jan 28 06:55:18.641000 audit: BPF prog-id=17 op=LOAD Jan 28 06:55:18.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:18.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.817000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 06:55:18.817000 audit[1241]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcc6e04340 a2=4000 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:18.817000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 06:55:18.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:18.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.327830 systemd[1]: Queued start job for default target multi-user.target. Jan 28 06:55:18.355177 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 06:55:18.835987 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 06:55:18.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.356315 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 06:55:18.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.853806 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 06:55:18.856272 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 06:55:18.857109 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 06:55:18.857162 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 06:55:18.859449 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 06:55:18.860473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:55:18.860655 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 06:55:18.864165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 06:55:18.867225 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 06:55:18.870092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 06:55:18.873384 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 06:55:18.874363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:55:18.879260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:55:18.887248 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 06:55:18.891473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 06:55:18.914118 systemd-journald[1241]: Time spent on flushing to /var/log/journal/778f741724164167a520c661c8098199 is 95.641ms for 1292 entries. Jan 28 06:55:18.914118 systemd-journald[1241]: System Journal (/var/log/journal/778f741724164167a520c661c8098199) is 8M, max 588.1M, 580.1M free. Jan 28 06:55:19.039593 systemd-journald[1241]: Received client request to flush runtime journal. 
Jan 28 06:55:19.039716 kernel: loop1: detected capacity change from 0 to 111560 Jan 28 06:55:19.039768 kernel: loop2: detected capacity change from 0 to 229808 Jan 28 06:55:18.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:18.916427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 06:55:18.918358 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 06:55:18.929362 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 06:55:19.002208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:55:19.045465 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 06:55:19.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.049036 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 06:55:19.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.069334 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 28 06:55:19.069364 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 28 06:55:19.090100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 06:55:19.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.092845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:55:19.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.100231 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 06:55:19.109001 kernel: loop3: detected capacity change from 0 to 8 Jan 28 06:55:19.131004 kernel: loop4: detected capacity change from 0 to 50784 Jan 28 06:55:19.154161 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 06:55:19.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.155000 audit: BPF prog-id=18 op=LOAD Jan 28 06:55:19.156000 audit: BPF prog-id=19 op=LOAD Jan 28 06:55:19.156000 audit: BPF prog-id=20 op=LOAD Jan 28 06:55:19.159216 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 28 06:55:19.161000 audit: BPF prog-id=21 op=LOAD Jan 28 06:55:19.164173 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:55:19.168281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:55:19.175982 kernel: loop5: detected capacity change from 0 to 111560 Jan 28 06:55:19.187000 audit: BPF prog-id=22 op=LOAD Jan 28 06:55:19.187000 audit: BPF prog-id=23 op=LOAD Jan 28 06:55:19.187000 audit: BPF prog-id=24 op=LOAD Jan 28 06:55:19.189615 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 06:55:19.191000 audit: BPF prog-id=25 op=LOAD Jan 28 06:55:19.192000 audit: BPF prog-id=26 op=LOAD Jan 28 06:55:19.192000 audit: BPF prog-id=27 op=LOAD Jan 28 06:55:19.195131 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 06:55:19.216991 kernel: loop6: detected capacity change from 0 to 229808 Jan 28 06:55:19.241645 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Jan 28 06:55:19.242993 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Jan 28 06:55:19.249006 kernel: loop7: detected capacity change from 0 to 8 Jan 28 06:55:19.257179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:55:19.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.266994 kernel: loop1: detected capacity change from 0 to 50784 Jan 28 06:55:19.290191 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 28 06:55:19.300962 (sd-merge)[1308]: Merged extensions into '/usr'. Jan 28 06:55:19.315553 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 06:55:19.315608 systemd[1]: Reloading... Jan 28 06:55:19.351360 systemd-nsresourced[1311]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 06:55:19.511746 zram_generator::config[1351]: No configuration found. Jan 28 06:55:19.606715 systemd-oomd[1306]: No swap; memory pressure usage will be degraded Jan 28 06:55:19.622497 systemd-resolved[1307]: Positive Trust Anchors: Jan 28 06:55:19.622529 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:55:19.622537 systemd-resolved[1307]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 06:55:19.622584 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:55:19.657148 systemd-resolved[1307]: Using system hostname 'srv-gf17r.gb1.brightbox.com'. Jan 28 06:55:19.871074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 06:55:19.872104 systemd[1]: Reloading finished in 555 ms. 
Jan 28 06:55:19.908507 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 06:55:19.909999 kernel: kauditd_printk_skb: 101 callbacks suppressed Jan 28 06:55:19.910139 kernel: audit: type=1130 audit(1769583319.908:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.909808 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 06:55:19.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.915932 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 06:55:19.919989 kernel: audit: type=1130 audit(1769583319.914:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.922362 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 06:55:19.926971 kernel: audit: type=1130 audit(1769583319.921:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.928719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 06:55:19.932981 kernel: audit: type=1130 audit(1769583319.927:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.937983 kernel: audit: type=1130 audit(1769583319.932:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:19.941305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:55:19.944528 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 06:55:19.952274 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 06:55:19.961154 systemd[1]: Starting ensure-sysext.service... 
Jan 28 06:55:19.965456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:55:19.981979 kernel: audit: type=1334 audit(1769583319.973:154): prog-id=28 op=LOAD Jan 28 06:55:19.982097 kernel: audit: type=1334 audit(1769583319.976:155): prog-id=22 op=UNLOAD Jan 28 06:55:19.982151 kernel: audit: type=1334 audit(1769583319.976:156): prog-id=29 op=LOAD Jan 28 06:55:19.973000 audit: BPF prog-id=28 op=LOAD Jan 28 06:55:19.976000 audit: BPF prog-id=22 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=29 op=LOAD Jan 28 06:55:19.976000 audit: BPF prog-id=30 op=LOAD Jan 28 06:55:19.986046 kernel: audit: type=1334 audit(1769583319.976:157): prog-id=30 op=LOAD Jan 28 06:55:19.986122 kernel: audit: type=1334 audit(1769583319.976:158): prog-id=23 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=23 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=24 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=31 op=LOAD Jan 28 06:55:19.976000 audit: BPF prog-id=15 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=32 op=LOAD Jan 28 06:55:19.976000 audit: BPF prog-id=33 op=LOAD Jan 28 06:55:19.976000 audit: BPF prog-id=16 op=UNLOAD Jan 28 06:55:19.976000 audit: BPF prog-id=17 op=UNLOAD Jan 28 06:55:19.981000 audit: BPF prog-id=34 op=LOAD Jan 28 06:55:19.981000 audit: BPF prog-id=21 op=UNLOAD Jan 28 06:55:19.987000 audit: BPF prog-id=35 op=LOAD Jan 28 06:55:19.987000 audit: BPF prog-id=25 op=UNLOAD Jan 28 06:55:19.987000 audit: BPF prog-id=36 op=LOAD Jan 28 06:55:19.987000 audit: BPF prog-id=37 op=LOAD Jan 28 06:55:19.987000 audit: BPF prog-id=26 op=UNLOAD Jan 28 06:55:19.987000 audit: BPF prog-id=27 op=UNLOAD Jan 28 06:55:19.988000 audit: BPF prog-id=38 op=LOAD Jan 28 06:55:19.988000 audit: BPF prog-id=18 op=UNLOAD Jan 28 06:55:19.989000 audit: BPF prog-id=39 op=LOAD Jan 28 06:55:19.989000 audit: BPF prog-id=40 op=LOAD Jan 28 06:55:19.989000 audit: BPF prog-id=19 op=UNLOAD Jan 28 06:55:19.989000 audit: BPF prog-id=20 op=UNLOAD Jan 28 06:55:19.994141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 06:55:19.995264 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 06:55:20.003865 systemd[1]: Reload requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... Jan 28 06:55:20.004106 systemd[1]: Reloading... Jan 28 06:55:20.031478 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 06:55:20.032278 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 06:55:20.033150 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 06:55:20.035953 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jan 28 06:55:20.036187 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jan 28 06:55:20.046408 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:55:20.046428 systemd-tmpfiles[1413]: Skipping /boot Jan 28 06:55:20.067752 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:55:20.067989 systemd-tmpfiles[1413]: Skipping /boot Jan 28 06:55:20.125112 zram_generator::config[1447]: No configuration found. Jan 28 06:55:20.417343 systemd[1]: Reloading finished in 412 ms. Jan 28 06:55:20.431929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 28 06:55:20.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.434000 audit: BPF prog-id=41 op=LOAD Jan 28 06:55:20.434000 audit: BPF prog-id=35 op=UNLOAD Jan 28 06:55:20.434000 audit: BPF prog-id=42 op=LOAD Jan 28 06:55:20.434000 audit: BPF prog-id=43 op=LOAD Jan 28 06:55:20.434000 audit: BPF prog-id=36 op=UNLOAD Jan 28 06:55:20.434000 audit: BPF prog-id=37 op=UNLOAD Jan 28 06:55:20.435000 audit: BPF prog-id=44 op=LOAD Jan 28 06:55:20.435000 audit: BPF prog-id=28 op=UNLOAD Jan 28 06:55:20.435000 audit: BPF prog-id=45 op=LOAD Jan 28 06:55:20.435000 audit: BPF prog-id=46 op=LOAD Jan 28 06:55:20.435000 audit: BPF prog-id=29 op=UNLOAD Jan 28 06:55:20.435000 audit: BPF prog-id=30 op=UNLOAD Jan 28 06:55:20.436000 audit: BPF prog-id=47 op=LOAD Jan 28 06:55:20.436000 audit: BPF prog-id=38 op=UNLOAD Jan 28 06:55:20.437000 audit: BPF prog-id=48 op=LOAD Jan 28 06:55:20.437000 audit: BPF prog-id=49 op=LOAD Jan 28 06:55:20.437000 audit: BPF prog-id=39 op=UNLOAD Jan 28 06:55:20.437000 audit: BPF prog-id=40 op=UNLOAD Jan 28 06:55:20.440000 audit: BPF prog-id=50 op=LOAD Jan 28 06:55:20.440000 audit: BPF prog-id=31 op=UNLOAD Jan 28 06:55:20.440000 audit: BPF prog-id=51 op=LOAD Jan 28 06:55:20.440000 audit: BPF prog-id=52 op=LOAD Jan 28 06:55:20.440000 audit: BPF prog-id=32 op=UNLOAD Jan 28 06:55:20.440000 audit: BPF prog-id=33 op=UNLOAD Jan 28 06:55:20.447000 audit: BPF prog-id=53 op=LOAD Jan 28 06:55:20.447000 audit: BPF prog-id=34 op=UNLOAD Jan 28 06:55:20.452094 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:55:20.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.465572 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:55:20.467881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 06:55:20.472246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 06:55:20.478775 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 06:55:20.479000 audit: BPF prog-id=54 op=LOAD Jan 28 06:55:20.479000 audit: BPF prog-id=7 op=UNLOAD Jan 28 06:55:20.479000 audit: BPF prog-id=8 op=UNLOAD Jan 28 06:55:20.481000 audit: BPF prog-id=55 op=LOAD Jan 28 06:55:20.486362 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:55:20.490822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 06:55:20.498438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.498732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:55:20.503140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 06:55:20.510098 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 06:55:20.517920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 28 06:55:20.519156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:55:20.519426 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 06:55:20.519574 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:55:20.519719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.528123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.528398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:55:20.528672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:55:20.528926 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 06:55:20.530117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:55:20.530255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.536681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.537053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:55:20.564401 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 06:55:20.565842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:55:20.566152 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 06:55:20.566314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:55:20.566503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:55:20.579561 systemd-udevd[1509]: Using default interface naming scheme 'v257'. Jan 28 06:55:20.581530 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 06:55:20.581888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 06:55:20.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:20.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.592131 systemd[1]: Finished ensure-sysext.service. Jan 28 06:55:20.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.598000 audit: BPF prog-id=56 op=LOAD Jan 28 06:55:20.603279 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 06:55:20.621000 audit[1510]: SYSTEM_BOOT pid=1510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.632091 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 06:55:20.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.645431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 06:55:20.645789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 06:55:20.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.647134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:55:20.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.650716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 06:55:20.651150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 06:55:20.652156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 06:55:20.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.656305 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 28 06:55:20.656651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 06:55:20.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.678000 audit: BPF prog-id=57 op=LOAD Jan 28 06:55:20.675637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:55:20.684002 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:55:20.707372 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 06:55:20.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.746756 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 06:55:20.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:20.748030 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 06:55:20.774000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 06:55:20.774000 audit[1557]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9b97c810 a2=420 a3=0 items=0 ppid=1505 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:20.774000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:55:20.777232 augenrules[1557]: No rules Jan 28 06:55:20.780439 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:55:20.780917 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 06:55:20.821356 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 06:55:20.822421 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 06:55:20.897548 systemd-networkd[1540]: lo: Link UP Jan 28 06:55:20.897565 systemd-networkd[1540]: lo: Gained carrier Jan 28 06:55:20.900765 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 06:55:20.902285 systemd[1]: Reached target network.target - Network. Jan 28 06:55:20.906069 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 06:55:20.911680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 06:55:20.983023 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 06:55:21.011714 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 28 06:55:21.131767 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:55:21.131797 systemd-networkd[1540]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:55:21.133348 systemd-networkd[1540]: eth0: Link UP Jan 28 06:55:21.133616 systemd-networkd[1540]: eth0: Gained carrier Jan 28 06:55:21.133648 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:55:21.152678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:55:21.157257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 06:55:21.159298 systemd-networkd[1540]: eth0: DHCPv4 address 10.230.31.94/30, gateway 10.230.31.93 acquired from 10.230.31.93 Jan 28 06:55:21.162382 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 28 06:55:21.206982 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 06:55:21.241741 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 06:55:21.273986 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 06:55:21.290980 kernel: ACPI: button: Power Button [PWRF] Jan 28 06:55:21.366757 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 06:55:21.367973 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 06:55:21.373000 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 06:55:21.373490 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 06:55:21.379858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 06:55:21.419552 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 06:55:21.421454 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 06:55:21.422494 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 06:55:21.424305 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 06:55:21.425125 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 06:55:21.426132 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 06:55:21.428217 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 06:55:21.429058 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 06:55:21.429967 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 06:55:21.431752 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 06:55:21.432517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 06:55:21.432565 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:55:21.434148 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:55:21.436291 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 28 06:55:21.443581 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 06:55:21.448668 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 06:55:21.449763 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 06:55:21.451356 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 06:55:21.464119 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 06:55:21.465385 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 06:55:21.468103 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 06:55:21.471083 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:55:21.472191 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:55:21.474106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 06:55:21.474160 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 06:55:21.476248 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 06:55:21.481262 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 06:55:21.486668 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 06:55:21.488932 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 06:55:21.495011 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 06:55:21.498262 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 06:55:21.500056 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 06:55:21.508263 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 06:55:21.514680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 06:55:21.518222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 06:55:21.527301 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 06:55:21.533591 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 06:55:21.547015 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:21.545120 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 06:55:21.548693 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 06:55:21.549708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 06:55:21.554247 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 06:55:21.556094 jq[1606]: false Jan 28 06:55:21.565217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 06:55:21.580886 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 06:55:21.582627 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 06:55:21.584228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 28 06:55:21.606183 oslogin_cache_refresh[1608]: Refreshing passwd entry cache Jan 28 06:55:21.607508 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing passwd entry cache Jan 28 06:55:21.629254 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting users, quitting Jan 28 06:55:21.629254 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:55:21.629254 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing group entry cache Jan 28 06:55:21.628450 oslogin_cache_refresh[1608]: Failure getting users, quitting Jan 28 06:55:21.628484 oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:55:21.628585 oslogin_cache_refresh[1608]: Refreshing group entry cache Jan 28 06:55:21.630713 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 06:55:21.631237 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 06:55:21.634964 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting groups, quitting Jan 28 06:55:21.634964 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:55:21.633105 oslogin_cache_refresh[1608]: Failure getting groups, quitting Jan 28 06:55:21.633123 oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:55:21.642073 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 06:55:21.642497 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 06:55:21.662687 extend-filesystems[1607]: Found /dev/vda6 Jan 28 06:55:21.680501 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 06:55:21.680998 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 06:55:21.686586 jq[1619]: true Jan 28 06:55:21.696494 extend-filesystems[1607]: Found /dev/vda9 Jan 28 06:55:21.730781 extend-filesystems[1607]: Checking size of /dev/vda9 Jan 28 06:55:21.762653 update_engine[1615]: I20260128 06:55:21.762415 1615 main.cc:92] Flatcar Update Engine starting Jan 28 06:55:21.767914 tar[1622]: linux-amd64/LICENSE Jan 28 06:55:21.770507 tar[1622]: linux-amd64/helm Jan 28 06:55:21.793080 extend-filesystems[1607]: Resized partition /dev/vda9 Jan 28 06:55:21.800289 jq[1644]: true Jan 28 06:55:21.802436 dbus-daemon[1604]: [system] SELinux support is enabled Jan 28 06:55:21.804557 extend-filesystems[1654]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 06:55:21.802919 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 06:55:21.807968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 06:55:21.808014 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 06:55:21.810813 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 06:55:21.810855 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 28 06:55:21.820891 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 28 06:55:21.827617 dbus-daemon[1604]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1540 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 06:55:21.830404 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 06:55:21.839350 update_engine[1615]: I20260128 06:55:21.839238 1615 update_check_scheduler.cc:74] Next update check in 5m58s Jan 28 06:55:21.844318 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 28 06:55:21.847514 systemd[1]: Started update-engine.service - Update Engine. Jan 28 06:55:21.899564 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 06:55:22.064630 bash[1676]: Updated "/home/core/.ssh/authorized_keys" Jan 28 06:55:22.070618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 06:55:22.080611 systemd[1]: Starting sshkeys.service... Jan 28 06:55:22.126372 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 28 06:55:22.151552 extend-filesystems[1654]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 06:55:22.151552 extend-filesystems[1654]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 28 06:55:22.151552 extend-filesystems[1654]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 28 06:55:22.160394 extend-filesystems[1607]: Resized filesystem in /dev/vda9 Jan 28 06:55:22.158051 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 06:55:22.159610 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 06:55:22.173046 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 06:55:22.179065 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 06:55:22.215918 containerd[1638]: time="2026-01-28T06:55:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 06:55:22.242119 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:22.248075 containerd[1638]: time="2026-01-28T06:55:22.247115768Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 06:55:22.277297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
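The extend-filesystems and resize2fs entries above report an online grow of /dev/vda9 from 1617920 to 14138363 blocks. A quick back-of-the-envelope conversion of those figures, assuming nothing beyond the 4k block size the log itself reports:

```python
# Convert the block counts reported by resize2fs/extend-filesystems into sizes.
# Both counts and the 4k block size are taken from the log lines above.
BLOCK_SIZE = 4096            # bytes ("(4k) blocks")
old_blocks = 1_617_920       # "resizing filesystem from 1617920 ..."
new_blocks = 14_138_363      # "... to 14138363 blocks"

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {to_gib(old_blocks):.1f} GiB, after: {to_gib(new_blocks):.1f} GiB")
# before: 6.2 GiB, after: 53.9 GiB
```

That is consistent with the root filesystem being expanded on first boot to fill the provisioned disk.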
Jan 28 06:55:22.308200 containerd[1638]: time="2026-01-28T06:55:22.308101318Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="32.832µs" Jan 28 06:55:22.308200 containerd[1638]: time="2026-01-28T06:55:22.308181955Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 06:55:22.308406 containerd[1638]: time="2026-01-28T06:55:22.308277987Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 06:55:22.308406 containerd[1638]: time="2026-01-28T06:55:22.308315473Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 06:55:22.308685 containerd[1638]: time="2026-01-28T06:55:22.308643954Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 06:55:22.308685 containerd[1638]: time="2026-01-28T06:55:22.308678130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 06:55:22.308842 containerd[1638]: time="2026-01-28T06:55:22.308808327Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 06:55:22.308891 containerd[1638]: time="2026-01-28T06:55:22.308840366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336162 containerd[1638]: time="2026-01-28T06:55:22.335615954Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336162 containerd[1638]: time="2026-01-28T06:55:22.335698081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336162 containerd[1638]: time="2026-01-28T06:55:22.335732907Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336162 containerd[1638]: time="2026-01-28T06:55:22.335760684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336407 containerd[1638]: time="2026-01-28T06:55:22.336238740Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336407 containerd[1638]: time="2026-01-28T06:55:22.336261240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 06:55:22.336490 containerd[1638]: time="2026-01-28T06:55:22.336446071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.337804 containerd[1638]: time="2026-01-28T06:55:22.336838401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 06:55:22.337804 containerd[1638]: time="2026-01-28T06:55:22.336902430Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 28 06:55:22.337804 containerd[1638]: time="2026-01-28T06:55:22.336923875Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 06:55:22.338125 containerd[1638]: time="2026-01-28T06:55:22.338017557Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 06:55:22.340153 containerd[1638]: time="2026-01-28T06:55:22.338588194Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 06:55:22.340153 containerd[1638]: time="2026-01-28T06:55:22.340091560Z" level=info msg="metadata content store policy set" policy=shared Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351100716Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351237997Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351387343Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351414169Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351439027Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351462829Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351495643Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351517063Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351536799Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351567702Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351589020Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351607922Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351624806Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 06:55:22.355274 containerd[1638]: time="2026-01-28T06:55:22.351646438Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.351904764Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 06:55:22.355924 
containerd[1638]: time="2026-01-28T06:55:22.354458427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354523590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354548616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354568167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354586301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354605434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354630335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354653062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354678696Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354697581Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354759375Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354885417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 06:55:22.355924 containerd[1638]: time="2026-01-28T06:55:22.354911541Z" level=info msg="Start snapshots syncer" Jan 28 06:55:22.359542 containerd[1638]: time="2026-01-28T06:55:22.357253310Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 06:55:22.359542 containerd[1638]: time="2026-01-28T06:55:22.358086514Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358295409Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358476909Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358716390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358773777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358800085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358826826Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358868404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358897228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 06:55:22.359913 containerd[1638]: time="2026-01-28T06:55:22.358920540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.360990214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 
06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361037532Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361098076Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361128374Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361150692Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361186231Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361206016Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361229192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361261584Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361311172Z" level=info msg="runtime interface created" Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361325025Z" level=info msg="created NRI interface" Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361340607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361372517Z" level=info msg="Connect containerd service" Jan 28 06:55:22.363982 containerd[1638]: time="2026-01-28T06:55:22.361427172Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 06:55:22.365775 containerd[1638]: time="2026-01-28T06:55:22.365705048Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 06:55:22.503780 systemd-logind[1613]: New seat seat0. Jan 28 06:55:22.507242 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 06:55:22.613690 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 06:55:22.624158 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 06:55:22.626447 dbus-daemon[1604]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1659 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 06:55:22.660187 systemd-logind[1613]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 06:55:22.688486 systemd[1]: Starting polkit.service - Authorization Manager... 
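The "starting cri plugin" entry a few lines above dumps containerd's effective CRI configuration as a single JSON blob, and the error that follows notes that no CNI config exists yet in /etc/cni/net.d. A rough sketch of pulling out the two fields relevant to that warning; the JSON literal below is trimmed to just those keys, not the full dump, and the key paths follow the logged config:

```python
# Extract a couple of fields from a CRI config dump like the one logged above.
# The literal is abbreviated to the relevant keys only.
import json

cri_config = json.loads(
    '{"containerd":{"runtimes":{"runc":{"options":{"SystemdCgroup":true}}}},'
    '"cni":{"binDirs":["/opt/cni/bin"],"confDir":"/etc/cni/net.d","maxConfNum":1}}'
)

print("SystemdCgroup:", cri_config["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])
print("CNI conf dir :", cri_config["cni"]["confDir"])  # empty at this point, hence the warning
```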
Jan 28 06:55:22.806618 systemd-logind[1613]: Watching system buttons on /dev/input/event3 (Power Button) Jan 28 06:55:22.882212 containerd[1638]: time="2026-01-28T06:55:22.881999928Z" level=info msg="Start subscribing containerd event" Jan 28 06:55:22.882212 containerd[1638]: time="2026-01-28T06:55:22.882098139Z" level=info msg="Start recovering state" Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882325510Z" level=info msg="Start event monitor" Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882351357Z" level=info msg="Start cni network conf syncer for default" Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882364716Z" level=info msg="Start streaming server" Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882389495Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882402874Z" level=info msg="runtime interface starting up..." Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882416900Z" level=info msg="starting plugins..." Jan 28 06:55:22.882505 containerd[1638]: time="2026-01-28T06:55:22.882456962Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 06:55:22.884064 containerd[1638]: time="2026-01-28T06:55:22.883461221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 06:55:22.884064 containerd[1638]: time="2026-01-28T06:55:22.883651694Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 06:55:22.886002 containerd[1638]: time="2026-01-28T06:55:22.885044852Z" level=info msg="containerd successfully booted in 0.671067s" Jan 28 06:55:22.885554 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 06:55:22.899078 locksmithd[1660]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 06:55:23.100101 systemd-networkd[1540]: eth0: Gained IPv6LL Jan 28 06:55:23.101707 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 28 06:55:23.110066 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 06:55:23.123866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:55:23.143087 polkitd[1705]: Started polkitd version 126 Jan 28 06:55:23.167341 tar[1622]: linux-amd64/README.md Jan 28 06:55:23.169620 polkitd[1705]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 06:55:23.172415 polkitd[1705]: Loading rules from directory /run/polkit-1/rules.d Jan 28 06:55:23.172506 polkitd[1705]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 06:55:23.172865 polkitd[1705]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 28 06:55:23.172905 polkitd[1705]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 06:55:23.178049 polkitd[1705]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 06:55:23.179541 polkitd[1705]: Finished loading, compiling and executing 2 rules Jan 28 06:55:23.180352 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 06:55:23.189751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 06:55:23.193023 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 06:55:23.194158 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 06:55:23.195638 polkitd[1705]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 06:55:23.209062 systemd[1]: Started polkit.service - Authorization Manager. Jan 28 06:55:23.220062 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 06:55:23.268213 systemd-hostnamed[1659]: Hostname set to (static) Jan 28 06:55:23.274800 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 06:55:23.612034 sshd_keygen[1648]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 06:55:23.650429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 06:55:23.671391 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 06:55:23.692841 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 06:55:23.693354 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 06:55:23.698513 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 06:55:23.725444 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 06:55:23.730629 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 06:55:23.735071 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 06:55:23.736369 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 06:55:24.021001 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:24.027019 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:24.340200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:55:24.360719 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:55:24.608939 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 28 06:55:24.613385 systemd-networkd[1540]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87d7:24:19ff:fee6:1f5e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87d7:24:19ff:fee6:1f5e/64 assigned by NDisc. Jan 28 06:55:24.613397 systemd-networkd[1540]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 06:55:25.057727 kubelet[1761]: E0128 06:55:25.057439 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:55:25.060123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:55:25.060405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:55:25.061450 systemd[1]: kubelet.service: Consumed 1.170s CPU time, 269.3M memory peak. Jan 28 06:55:25.980420 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. 
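kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; the unit is simply restarted later (the scheduled restart appears further down) until something, typically kubeadm during init/join, writes that file. A minimal reproduction of the check it fails on, using only the path from the logged error:

```python
# Reproduce the probe kubelet fails on above: its KubeletConfiguration file
# has not been written yet on this first boot.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")      # path copied from the error above
if cfg.is_file():
    print(f"{cfg}: {cfg.stat().st_size} bytes")
else:
    print(f"{cfg}: no such file or directory")  # what run.go reports in the log
```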
Jan 28 06:55:26.047440 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:26.047635 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:29.171495 login[1752]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:29.171496 login[1753]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:29.193201 systemd-logind[1613]: New session 2 of user core. Jan 28 06:55:29.197727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 06:55:29.200505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 06:55:29.206616 systemd-logind[1613]: New session 1 of user core. Jan 28 06:55:29.238815 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 06:55:29.243178 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 06:55:29.266510 (systemd)[1779]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:29.271639 systemd-logind[1613]: New session 3 of user core. Jan 28 06:55:29.462897 systemd[1779]: Queued start job for default target default.target. Jan 28 06:55:29.474014 systemd[1779]: Created slice app.slice - User Application Slice. Jan 28 06:55:29.474083 systemd[1779]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 06:55:29.474110 systemd[1779]: Reached target paths.target - Paths. Jan 28 06:55:29.474211 systemd[1779]: Reached target timers.target - Timers. Jan 28 06:55:29.476551 systemd[1779]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 06:55:29.478154 systemd[1779]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 06:55:29.505285 systemd[1779]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 06:55:29.505709 systemd[1779]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 06:55:29.506014 systemd[1779]: Reached target sockets.target - Sockets. Jan 28 06:55:29.506568 systemd[1779]: Reached target basic.target - Basic System. Jan 28 06:55:29.506676 systemd[1779]: Reached target default.target - Main User Target. Jan 28 06:55:29.506748 systemd[1779]: Startup finished in 226ms. Jan 28 06:55:29.507071 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 06:55:29.519426 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 06:55:29.520922 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 28 06:55:30.066974 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:30.068972 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 06:55:30.078165 coreos-metadata[1685]: Jan 28 06:55:30.077 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:55:30.083366 coreos-metadata[1603]: Jan 28 06:55:30.083 WARN failed to locate config-drive, using the metadata service API instead Jan 28 06:55:30.105981 coreos-metadata[1685]: Jan 28 06:55:30.104 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 28 06:55:30.108122 coreos-metadata[1603]: Jan 28 06:55:30.107 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 28 06:55:30.113915 coreos-metadata[1603]: Jan 28 06:55:30.113 INFO Fetch failed with 404: resource not found Jan 28 06:55:30.114183 coreos-metadata[1603]: Jan 28 06:55:30.114 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 06:55:30.114619 coreos-metadata[1603]: Jan 28 06:55:30.114 INFO Fetch successful Jan 28 06:55:30.114860 coreos-metadata[1603]: Jan 28 06:55:30.114 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 28 06:55:30.126777 coreos-metadata[1603]: Jan 28 06:55:30.126 INFO Fetch successful Jan 28 06:55:30.127090 coreos-metadata[1603]: Jan 28 06:55:30.127 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 28 06:55:30.128133 coreos-metadata[1685]: Jan 28 06:55:30.128 INFO Fetch successful Jan 28 06:55:30.128285 coreos-metadata[1685]: Jan 28 06:55:30.128 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 06:55:30.140364 coreos-metadata[1603]: Jan 28 06:55:30.140 INFO Fetch successful Jan 28 06:55:30.140364 coreos-metadata[1603]: Jan 28 06:55:30.140 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 28 06:55:30.153161 coreos-metadata[1603]: Jan 28 06:55:30.153 INFO Fetch successful Jan 28 06:55:30.153314 coreos-metadata[1603]: Jan 28 06:55:30.153 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 28 06:55:30.155157 coreos-metadata[1685]: Jan 28 06:55:30.155 INFO Fetch successful Jan 28 06:55:30.164515 unknown[1685]: wrote ssh authorized keys file for user: core Jan 28 06:55:30.173708 coreos-metadata[1603]: Jan 28 06:55:30.171 INFO Fetch successful Jan 28 06:55:30.195723 update-ssh-keys[1816]: Updated "/home/core/.ssh/authorized_keys" Jan 28 06:55:30.200258 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 06:55:30.203734 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 06:55:30.204753 systemd[1]: Finished sshkeys.service. Jan 28 06:55:30.209375 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 06:55:30.209681 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 06:55:30.209994 systemd[1]: Startup finished in 3.505s (kernel) + 13.588s (initrd) + 12.835s (userspace) = 29.929s. Jan 28 06:55:31.548045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 06:55:31.549991 systemd[1]: Started sshd@0-10.230.31.94:22-20.161.92.111:44974.service - OpenSSH per-connection server daemon (20.161.92.111:44974). 
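The coreos-metadata entries above show both agents failing to locate a config-drive and falling back to the link-local metadata service, walking a handful of EC2-style paths under http://169.254.169.254. A rough sketch of the same lookups; the paths are the ones in the log, while the timeout and error handling are assumptions:

```python
# Replay the metadata lookups coreos-metadata logged above against the
# link-local endpoint. Only the paths come from the log; the rest is illustrative.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

BASE = "http://169.254.169.254/latest/meta-data"
PATHS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

for path in PATHS:
    try:
        with urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(f"{path}: {resp.read().decode().strip()}")
    except (HTTPError, URLError) as err:  # e.g. the 404 logged for the OpenStack JSON path
        print(f"{path}: fetch failed ({err})")
```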
Jan 28 06:55:32.091188 sshd[1825]: Accepted publickey for core from 20.161.92.111 port 44974 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:32.093283 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:32.101143 systemd-logind[1613]: New session 4 of user core. Jan 28 06:55:32.112296 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 06:55:32.458760 systemd[1]: Started sshd@1-10.230.31.94:22-20.161.92.111:44990.service - OpenSSH per-connection server daemon (20.161.92.111:44990). Jan 28 06:55:32.969394 sshd[1832]: Accepted publickey for core from 20.161.92.111 port 44990 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:32.971468 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:32.980102 systemd-logind[1613]: New session 5 of user core. Jan 28 06:55:32.988263 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 06:55:33.247594 sshd[1836]: Connection closed by 20.161.92.111 port 44990 Jan 28 06:55:33.248598 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Jan 28 06:55:33.254582 systemd[1]: sshd@1-10.230.31.94:22-20.161.92.111:44990.service: Deactivated successfully. Jan 28 06:55:33.257867 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 06:55:33.260315 systemd-logind[1613]: Session 5 logged out. Waiting for processes to exit. Jan 28 06:55:33.262193 systemd-logind[1613]: Removed session 5. Jan 28 06:55:33.352455 systemd[1]: Started sshd@2-10.230.31.94:22-20.161.92.111:40062.service - OpenSSH per-connection server daemon (20.161.92.111:40062). Jan 28 06:55:33.866991 sshd[1842]: Accepted publickey for core from 20.161.92.111 port 40062 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:33.868971 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:33.877263 systemd-logind[1613]: New session 6 of user core. Jan 28 06:55:33.884252 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 06:55:34.135419 sshd[1846]: Connection closed by 20.161.92.111 port 40062 Jan 28 06:55:34.136410 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Jan 28 06:55:34.144283 systemd[1]: sshd@2-10.230.31.94:22-20.161.92.111:40062.service: Deactivated successfully. Jan 28 06:55:34.147357 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 06:55:34.148741 systemd-logind[1613]: Session 6 logged out. Waiting for processes to exit. Jan 28 06:55:34.151222 systemd-logind[1613]: Removed session 6. Jan 28 06:55:34.241719 systemd[1]: Started sshd@3-10.230.31.94:22-20.161.92.111:40064.service - OpenSSH per-connection server daemon (20.161.92.111:40064). Jan 28 06:55:34.758780 sshd[1852]: Accepted publickey for core from 20.161.92.111 port 40064 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:34.760919 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:34.768735 systemd-logind[1613]: New session 7 of user core. Jan 28 06:55:34.778352 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 06:55:35.034073 sshd[1856]: Connection closed by 20.161.92.111 port 40064 Jan 28 06:55:35.033783 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Jan 28 06:55:35.040690 systemd-logind[1613]: Session 7 logged out. Waiting for processes to exit. 
Jan 28 06:55:35.041205 systemd[1]: sshd@3-10.230.31.94:22-20.161.92.111:40064.service: Deactivated successfully. Jan 28 06:55:35.043747 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 06:55:35.046415 systemd-logind[1613]: Removed session 7. Jan 28 06:55:35.137364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 06:55:35.140078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:55:35.143345 systemd[1]: Started sshd@4-10.230.31.94:22-20.161.92.111:40074.service - OpenSSH per-connection server daemon (20.161.92.111:40074). Jan 28 06:55:35.360158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:55:35.382463 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:55:35.443116 kubelet[1873]: E0128 06:55:35.442983 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:55:35.448350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:55:35.448610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:55:35.449257 systemd[1]: kubelet.service: Consumed 236ms CPU time, 108.6M memory peak. Jan 28 06:55:35.646573 sshd[1863]: Accepted publickey for core from 20.161.92.111 port 40074 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:35.649114 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:35.658504 systemd-logind[1613]: New session 8 of user core. Jan 28 06:55:35.665206 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 06:55:35.849742 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 06:55:35.850315 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:55:35.860816 sudo[1882]: pam_unix(sudo:session): session closed for user root Jan 28 06:55:35.950170 sshd[1881]: Connection closed by 20.161.92.111 port 40074 Jan 28 06:55:35.951230 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Jan 28 06:55:35.960723 systemd[1]: sshd@4-10.230.31.94:22-20.161.92.111:40074.service: Deactivated successfully. Jan 28 06:55:35.963716 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 06:55:35.965925 systemd-logind[1613]: Session 8 logged out. Waiting for processes to exit. Jan 28 06:55:35.969130 systemd-logind[1613]: Removed session 8. Jan 28 06:55:36.059813 systemd[1]: Started sshd@5-10.230.31.94:22-20.161.92.111:40078.service - OpenSSH per-connection server daemon (20.161.92.111:40078). Jan 28 06:55:36.563080 sshd[1889]: Accepted publickey for core from 20.161.92.111 port 40078 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:36.565003 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:36.573769 systemd-logind[1613]: New session 9 of user core. Jan 28 06:55:36.581182 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 06:55:36.751747 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 06:55:36.752316 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:55:36.755844 sudo[1895]: pam_unix(sudo:session): session closed for user root Jan 28 06:55:36.765972 sudo[1894]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 06:55:36.766504 sudo[1894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:55:36.778608 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:55:36.826000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 06:55:36.836072 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 28 06:55:36.836211 kernel: audit: type=1305 audit(1769583336.826:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 06:55:36.836268 kernel: audit: type=1300 audit(1769583336.826:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3782bde0 a2=420 a3=0 items=0 ppid=1900 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:36.826000 audit[1919]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3782bde0 a2=420 a3=0 items=0 ppid=1900 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:36.836648 augenrules[1919]: No rules Jan 28 06:55:36.826000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:55:36.842659 kernel: audit: type=1327 audit(1769583336.826:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:55:36.841883 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:55:36.842397 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 06:55:36.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.844539 sudo[1894]: pam_unix(sudo:session): session closed for user root Jan 28 06:55:36.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.850212 kernel: audit: type=1130 audit(1769583336.842:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.850311 kernel: audit: type=1131 audit(1769583336.842:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:36.843000 audit[1894]: USER_END pid=1894 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.854311 kernel: audit: type=1106 audit(1769583336.843:232): pid=1894 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.843000 audit[1894]: CRED_DISP pid=1894 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.858430 kernel: audit: type=1104 audit(1769583336.843:233): pid=1894 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.936175 sshd[1893]: Connection closed by 20.161.92.111 port 40078 Jan 28 06:55:36.937204 sshd-session[1889]: pam_unix(sshd:session): session closed for user core Jan 28 06:55:36.938000 audit[1889]: USER_END pid=1889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:36.943313 systemd-logind[1613]: Session 9 logged out. Waiting for processes to exit. Jan 28 06:55:36.945974 kernel: audit: type=1106 audit(1769583336.938:234): pid=1889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:36.946050 kernel: audit: type=1104 audit(1769583336.938:235): pid=1889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:36.938000 audit[1889]: CRED_DISP pid=1889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:36.945247 systemd[1]: sshd@5-10.230.31.94:22-20.161.92.111:40078.service: Deactivated successfully. Jan 28 06:55:36.948809 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 06:55:36.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.31.94:22-20.161.92.111:40078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.952640 kernel: audit: type=1131 audit(1769583336.941:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.31.94:22-20.161.92.111:40078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:36.953908 systemd-logind[1613]: Removed session 9. 
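The audit records in this stretch carry the executed command line in the PROCTITLE field as hex-encoded, NUL-separated argv. Decoding the one from the auditctl record above shows the command that reloaded the (now empty) rule set, and the same decoding applies to the iptables/ip6tables proctitles that follow once docker starts:

```python
# Decode an audit PROCTITLE field: it is the process argv, hex-encoded with
# NUL bytes between arguments. The hex string is copied verbatim from the log.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# /sbin/auditctl -R /etc/audit/audit.rules
```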
Jan 28 06:55:37.050442 systemd[1]: Started sshd@6-10.230.31.94:22-20.161.92.111:40080.service - OpenSSH per-connection server daemon (20.161.92.111:40080). Jan 28 06:55:37.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.31.94:22-20.161.92.111:40080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:37.551000 audit[1928]: USER_ACCT pid=1928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:37.552674 sshd[1928]: Accepted publickey for core from 20.161.92.111 port 40080 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:55:37.552000 audit[1928]: CRED_ACQ pid=1928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:37.553000 audit[1928]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff58fb2da0 a2=3 a3=0 items=0 ppid=1 pid=1928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:37.553000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:55:37.554827 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:55:37.563028 systemd-logind[1613]: New session 10 of user core. Jan 28 06:55:37.572324 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 06:55:37.577000 audit[1928]: USER_START pid=1928 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:37.580000 audit[1932]: CRED_ACQ pid=1932 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:55:37.739000 audit[1933]: USER_ACCT pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:37.741186 sudo[1933]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 06:55:37.740000 audit[1933]: CRED_REFR pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:55:37.740000 audit[1933]: USER_START pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 06:55:37.741719 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:55:38.290107 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 06:55:38.306597 (dockerd)[1952]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 06:55:38.702541 dockerd[1952]: time="2026-01-28T06:55:38.701815660Z" level=info msg="Starting up" Jan 28 06:55:38.705495 dockerd[1952]: time="2026-01-28T06:55:38.705461176Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 06:55:38.726065 dockerd[1952]: time="2026-01-28T06:55:38.725989762Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 06:55:38.784129 dockerd[1952]: time="2026-01-28T06:55:38.784059747Z" level=info msg="Loading containers: start." Jan 28 06:55:38.799223 kernel: Initializing XFRM netlink socket Jan 28 06:55:38.893000 audit[2003]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.893000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffff50e4600 a2=0 a3=0 items=0 ppid=1952 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 06:55:38.897000 audit[2005]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.897000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8033bcd0 a2=0 a3=0 items=0 ppid=1952 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 06:55:38.900000 audit[2007]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.900000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb0781a40 a2=0 a3=0 items=0 ppid=1952 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.900000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 06:55:38.903000 audit[2009]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.903000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe22db7fe0 a2=0 a3=0 items=0 ppid=1952 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.903000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 06:55:38.906000 audit[2011]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.906000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbb430aa0 a2=0 a3=0 items=0 ppid=1952 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 06:55:38.909000 audit[2013]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.909000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc8464f10 a2=0 a3=0 items=0 ppid=1952 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.909000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:55:38.914000 audit[2015]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.914000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdfc1b6dd0 a2=0 a3=0 items=0 ppid=1952 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:55:38.918000 audit[2017]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.918000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd39274380 a2=0 a3=0 items=0 ppid=1952 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 06:55:38.957000 audit[2020]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.957000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe4828a1d0 a2=0 a3=0 items=0 ppid=1952 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.957000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 06:55:38.960000 audit[2022]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.960000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcafc56c80 a2=0 a3=0 items=0 ppid=1952 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 06:55:38.964000 audit[2024]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.964000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff25e4a360 a2=0 a3=0 items=0 ppid=1952 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 06:55:38.967000 audit[2026]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.967000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe74fe2130 a2=0 a3=0 items=0 ppid=1952 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:55:38.970000 audit[2028]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:38.970000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffec8e5f390 a2=0 a3=0 items=0 ppid=1952 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:38.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 06:55:39.025000 audit[2058]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.025000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec6061fe0 a2=0 a3=0 items=0 ppid=1952 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.025000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 06:55:39.028000 audit[2060]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.028000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffd1e804c0 a2=0 a3=0 items=0 ppid=1952 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 06:55:39.031000 audit[2062]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.031000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff45fdde70 a2=0 a3=0 items=0 ppid=1952 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 06:55:39.034000 audit[2064]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.034000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd35c19e0 a2=0 a3=0 items=0 ppid=1952 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 06:55:39.037000 audit[2066]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.037000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff166dbca0 a2=0 a3=0 items=0 ppid=1952 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 06:55:39.040000 audit[2068]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.040000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe0fb65230 a2=0 a3=0 items=0 ppid=1952 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:55:39.043000 audit[2070]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2070 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.043000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc701178a0 a2=0 a3=0 items=0 ppid=1952 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:55:39.047000 audit[2072]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.047000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff3720c340 a2=0 a3=0 items=0 ppid=1952 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 06:55:39.051000 audit[2074]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.051000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd84160f20 a2=0 a3=0 items=0 ppid=1952 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 06:55:39.054000 audit[2076]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.054000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe49e1a2d0 a2=0 a3=0 items=0 ppid=1952 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 06:55:39.057000 audit[2078]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.057000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe9b3987f0 a2=0 a3=0 items=0 ppid=1952 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 06:55:39.060000 audit[2080]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 06:55:39.060000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffbdbd84e0 a2=0 a3=0 items=0 ppid=1952 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:55:39.064000 audit[2082]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.064000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe34ad5930 a2=0 a3=0 items=0 ppid=1952 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 06:55:39.072000 audit[2087]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.072000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd82792c30 a2=0 a3=0 items=0 ppid=1952 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.072000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 06:55:39.075000 audit[2089]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.075000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe0d23a510 a2=0 a3=0 items=0 ppid=1952 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.075000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 06:55:39.078000 audit[2091]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.078000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe244db480 a2=0 a3=0 items=0 ppid=1952 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 06:55:39.081000 audit[2093]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.081000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd61053c80 a2=0 a3=0 items=0 ppid=1952 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 06:55:39.084000 audit[2095]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.084000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdfe9d8d10 a2=0 a3=0 items=0 ppid=1952 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 06:55:39.088000 audit[2097]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:55:39.088000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc2325df60 a2=0 a3=0 items=0 ppid=1952 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 06:55:39.098937 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 28 06:55:39.113000 audit[2101]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.113000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff4e01ab20 a2=0 a3=0 items=0 ppid=1952 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 06:55:39.117000 audit[2103]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.117000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffff98db6c0 a2=0 a3=0 items=0 ppid=1952 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 06:55:39.132000 audit[2111]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.132000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff66479620 a2=0 a3=0 items=0 ppid=1952 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 06:55:39.147000 audit[2117]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.147000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeda36d350 a2=0 a3=0 items=0 ppid=1952 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 06:55:39.150000 audit[2119]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.150000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe3dfe9ec0 a2=0 a3=0 items=0 ppid=1952 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 06:55:39.154000 audit[2121]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.154000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef7b7ef80 a2=0 a3=0 items=0 ppid=1952 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 06:55:39.157000 audit[2123]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.157000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe1b8e3920 a2=0 a3=0 items=0 ppid=1952 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:55:39.160000 audit[2125]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:55:39.160000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7fff3e92d1c0 a2=0 a3=0 items=0 ppid=1952 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:55:39.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 06:55:39.162262 systemd-networkd[1540]: docker0: Link UP Jan 28 06:55:39.166723 dockerd[1952]: time="2026-01-28T06:55:39.166648478Z" level=info msg="Loading containers: done." Jan 28 06:55:39.192965 dockerd[1952]: time="2026-01-28T06:55:39.192882316Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 06:55:39.193183 dockerd[1952]: time="2026-01-28T06:55:39.193014726Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 06:55:39.193183 dockerd[1952]: time="2026-01-28T06:55:39.193151976Z" level=info msg="Initializing buildkit" Jan 28 06:55:39.221911 dockerd[1952]: time="2026-01-28T06:55:39.221757856Z" level=info msg="Completed buildkit initialization" Jan 28 06:55:39.234535 dockerd[1952]: time="2026-01-28T06:55:39.234426728Z" level=info msg="Daemon has completed initialization" Jan 28 06:55:39.234924 dockerd[1952]: time="2026-01-28T06:55:39.234676827Z" level=info msg="API listen on /run/docker.sock" Jan 28 06:55:39.235729 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 06:55:39.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:39.746014 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1792656212-merged.mount: Deactivated successfully. Jan 28 06:55:40.110464 systemd-timesyncd[1523]: Contacted time server [2a00:da00:f411:2900::123]:123 (2.flatcar.pool.ntp.org). Jan 28 06:55:40.110588 systemd-timesyncd[1523]: Initial clock synchronization to Wed 2026-01-28 06:55:39.929600 UTC. Jan 28 06:55:40.422414 containerd[1638]: time="2026-01-28T06:55:40.420934256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 28 06:55:41.196196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465964028.mount: Deactivated successfully. 
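Annotation (not part of the journal): the NETFILTER_CFG audit records above carry the invoking command line in the PROCTITLE field as hex-encoded, NUL-separated argv. A minimal Python sketch to make one readable; the sample value is copied from the first iptables record above, everything else is illustrative.

# Decode an audit PROCTITLE value: argv hex-encoded with NUL separators.
# Sample copied from the first "iptables ... -t nat -N DOCKER" record above.
proctitle_hex = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
)

def decode_proctitle(hex_value: str) -> str:
    """Return the command line encoded in an audit PROCTITLE field."""
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(proctitle_hex))
# -> /usr/bin/iptables --wait -t nat -N DOCKER

The same decoding applies to the ip6tables records above and to the KUBE-IPTABLES-HINT and KUBE-FIREWALL records the kubelet emits further down.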
Jan 28 06:55:43.530829 containerd[1638]: time="2026-01-28T06:55:43.530721234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:43.532572 containerd[1638]: time="2026-01-28T06:55:43.532518526Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28446035" Jan 28 06:55:43.533505 containerd[1638]: time="2026-01-28T06:55:43.533429109Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:43.538296 containerd[1638]: time="2026-01-28T06:55:43.538227224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:43.540233 containerd[1638]: time="2026-01-28T06:55:43.539639167Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.118552522s" Jan 28 06:55:43.540233 containerd[1638]: time="2026-01-28T06:55:43.539719420Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 28 06:55:43.541991 containerd[1638]: time="2026-01-28T06:55:43.541915127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 28 06:55:45.494314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 06:55:45.500052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:55:46.000282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:55:46.009131 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 28 06:55:46.009282 kernel: audit: type=1130 audit(1769583345.999:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:45.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:46.014069 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:55:46.096962 kubelet[2235]: E0128 06:55:46.096840 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:55:46.101697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:55:46.102476 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
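Annotation (not part of the journal): the pull-completion entries report both bytes read and wall time, so effective throughput can be estimated straight from the log. A rough sketch using the kube-apiserver figures above; both numbers are copied verbatim, nothing here is measured independently.

# Back-of-envelope throughput for the kube-apiserver pull above
# ("bytes read=28446035", "in 3.118552522s").
bytes_read = 28_446_035
duration_s = 3.118552522
print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # roughly 9.1 MB/s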
Jan 28 06:55:46.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:55:46.103128 systemd[1]: kubelet.service: Consumed 249ms CPU time, 108.5M memory peak. Jan 28 06:55:46.107963 kernel: audit: type=1131 audit(1769583346.102:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:55:46.272080 containerd[1638]: time="2026-01-28T06:55:46.271886944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:46.275019 containerd[1638]: time="2026-01-28T06:55:46.274978050Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 28 06:55:46.276127 containerd[1638]: time="2026-01-28T06:55:46.276057708Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:46.279711 containerd[1638]: time="2026-01-28T06:55:46.279652284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:46.281527 containerd[1638]: time="2026-01-28T06:55:46.281321610Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.739339982s" Jan 28 06:55:46.281527 containerd[1638]: time="2026-01-28T06:55:46.281371953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 28 06:55:46.283386 containerd[1638]: time="2026-01-28T06:55:46.283338996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 28 06:55:49.413451 containerd[1638]: time="2026-01-28T06:55:49.411976307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:49.413451 containerd[1638]: time="2026-01-28T06:55:49.413382140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 28 06:55:49.414398 containerd[1638]: time="2026-01-28T06:55:49.414357139Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:49.417764 containerd[1638]: time="2026-01-28T06:55:49.417711647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:49.419278 containerd[1638]: time="2026-01-28T06:55:49.419239706Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" 
with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 3.135836915s" Jan 28 06:55:49.419440 containerd[1638]: time="2026-01-28T06:55:49.419411179Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 28 06:55:49.420198 containerd[1638]: time="2026-01-28T06:55:49.420169515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 28 06:55:51.204364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9589336.mount: Deactivated successfully. Jan 28 06:55:52.243204 containerd[1638]: time="2026-01-28T06:55:52.243127872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:52.244716 containerd[1638]: time="2026-01-28T06:55:52.244676663Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 28 06:55:52.245551 containerd[1638]: time="2026-01-28T06:55:52.245488843Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:52.248847 containerd[1638]: time="2026-01-28T06:55:52.248785679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:52.250971 containerd[1638]: time="2026-01-28T06:55:52.250791981Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.830486156s" Jan 28 06:55:52.250971 containerd[1638]: time="2026-01-28T06:55:52.250846963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 28 06:55:52.252254 containerd[1638]: time="2026-01-28T06:55:52.251937016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 28 06:55:52.886155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908817452.mount: Deactivated successfully. 
Jan 28 06:55:54.498970 containerd[1638]: time="2026-01-28T06:55:54.498279017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:54.500392 containerd[1638]: time="2026-01-28T06:55:54.500330956Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20380260" Jan 28 06:55:54.501637 containerd[1638]: time="2026-01-28T06:55:54.501586007Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:54.506798 containerd[1638]: time="2026-01-28T06:55:54.506732654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:55:54.508541 containerd[1638]: time="2026-01-28T06:55:54.508280275Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.256275993s" Jan 28 06:55:54.508541 containerd[1638]: time="2026-01-28T06:55:54.508334308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 28 06:55:54.509187 containerd[1638]: time="2026-01-28T06:55:54.509153928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 06:55:54.643326 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 06:55:54.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:54.650138 kernel: audit: type=1131 audit(1769583354.643:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:54.670000 audit: BPF prog-id=61 op=UNLOAD Jan 28 06:55:54.672964 kernel: audit: type=1334 audit(1769583354.670:290): prog-id=61 op=UNLOAD Jan 28 06:55:55.310615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660273950.mount: Deactivated successfully. 
Jan 28 06:55:55.318876 containerd[1638]: time="2026-01-28T06:55:55.318791786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:55:55.321168 containerd[1638]: time="2026-01-28T06:55:55.321090963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 06:55:55.321909 containerd[1638]: time="2026-01-28T06:55:55.321854338Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:55:55.325809 containerd[1638]: time="2026-01-28T06:55:55.325749780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:55:55.328089 containerd[1638]: time="2026-01-28T06:55:55.328038412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 818.842296ms" Jan 28 06:55:55.328089 containerd[1638]: time="2026-01-28T06:55:55.328085769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 06:55:55.328839 containerd[1638]: time="2026-01-28T06:55:55.328807897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 28 06:55:55.959661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099991225.mount: Deactivated successfully. Jan 28 06:55:56.244009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 06:55:56.248209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:55:56.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:56.586399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:55:56.591962 kernel: audit: type=1130 audit(1769583356.585:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:55:56.599359 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:55:56.713825 kubelet[2351]: E0128 06:55:56.713739 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:55:56.717838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:55:56.718298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 06:55:56.724025 kernel: audit: type=1131 audit(1769583356.718:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:55:56.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:55:56.719188 systemd[1]: kubelet.service: Consumed 258ms CPU time, 107.8M memory peak. Jan 28 06:56:00.020090 containerd[1638]: time="2026-01-28T06:56:00.019840057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:00.022019 containerd[1638]: time="2026-01-28T06:56:00.021618294Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 28 06:56:00.023208 containerd[1638]: time="2026-01-28T06:56:00.023139055Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:00.027752 containerd[1638]: time="2026-01-28T06:56:00.026930298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:00.028605 containerd[1638]: time="2026-01-28T06:56:00.028563732Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.699713902s" Jan 28 06:56:00.028694 containerd[1638]: time="2026-01-28T06:56:00.028610402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 28 06:56:05.720658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:56:05.721486 systemd[1]: kubelet.service: Consumed 258ms CPU time, 107.8M memory peak. Jan 28 06:56:05.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:05.729059 kernel: audit: type=1130 audit(1769583365.719:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:05.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:05.731332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:56:05.733969 kernel: audit: type=1131 audit(1769583365.719:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:56:05.785876 systemd[1]: Reload requested from client PID 2414 ('systemctl') (unit session-10.scope)... Jan 28 06:56:05.785932 systemd[1]: Reloading... Jan 28 06:56:05.994044 zram_generator::config[2462]: No configuration found. Jan 28 06:56:06.342314 systemd[1]: Reloading finished in 555 ms. Jan 28 06:56:06.386000 audit: BPF prog-id=65 op=LOAD Jan 28 06:56:06.393051 kernel: audit: type=1334 audit(1769583366.386:295): prog-id=65 op=LOAD Jan 28 06:56:06.393186 kernel: audit: type=1334 audit(1769583366.386:296): prog-id=56 op=UNLOAD Jan 28 06:56:06.386000 audit: BPF prog-id=56 op=UNLOAD Jan 28 06:56:06.395000 audit: BPF prog-id=66 op=LOAD Jan 28 06:56:06.399162 kernel: audit: type=1334 audit(1769583366.395:297): prog-id=66 op=LOAD Jan 28 06:56:06.399239 kernel: audit: type=1334 audit(1769583366.395:298): prog-id=50 op=UNLOAD Jan 28 06:56:06.395000 audit: BPF prog-id=50 op=UNLOAD Jan 28 06:56:06.400741 kernel: audit: type=1334 audit(1769583366.395:299): prog-id=67 op=LOAD Jan 28 06:56:06.395000 audit: BPF prog-id=67 op=LOAD Jan 28 06:56:06.402255 kernel: audit: type=1334 audit(1769583366.395:300): prog-id=68 op=LOAD Jan 28 06:56:06.395000 audit: BPF prog-id=68 op=LOAD Jan 28 06:56:06.395000 audit: BPF prog-id=51 op=UNLOAD Jan 28 06:56:06.403819 kernel: audit: type=1334 audit(1769583366.395:301): prog-id=51 op=UNLOAD Jan 28 06:56:06.403901 kernel: audit: type=1334 audit(1769583366.395:302): prog-id=52 op=UNLOAD Jan 28 06:56:06.395000 audit: BPF prog-id=52 op=UNLOAD Jan 28 06:56:06.396000 audit: BPF prog-id=69 op=LOAD Jan 28 06:56:06.396000 audit: BPF prog-id=64 op=UNLOAD Jan 28 06:56:06.406000 audit: BPF prog-id=70 op=LOAD Jan 28 06:56:06.406000 audit: BPF prog-id=47 op=UNLOAD Jan 28 06:56:06.406000 audit: BPF prog-id=71 op=LOAD Jan 28 06:56:06.406000 audit: BPF prog-id=72 op=LOAD Jan 28 06:56:06.406000 audit: BPF prog-id=48 op=UNLOAD Jan 28 06:56:06.406000 audit: BPF prog-id=49 op=UNLOAD Jan 28 06:56:06.407000 audit: BPF prog-id=73 op=LOAD Jan 28 06:56:06.407000 audit: BPF prog-id=74 op=LOAD Jan 28 06:56:06.407000 audit: BPF prog-id=54 op=UNLOAD Jan 28 06:56:06.407000 audit: BPF prog-id=55 op=UNLOAD Jan 28 06:56:06.407000 audit: BPF prog-id=75 op=LOAD Jan 28 06:56:06.407000 audit: BPF prog-id=44 op=UNLOAD Jan 28 06:56:06.408000 audit: BPF prog-id=76 op=LOAD Jan 28 06:56:06.408000 audit: BPF prog-id=77 op=LOAD Jan 28 06:56:06.408000 audit: BPF prog-id=45 op=UNLOAD Jan 28 06:56:06.408000 audit: BPF prog-id=46 op=UNLOAD Jan 28 06:56:06.409000 audit: BPF prog-id=78 op=LOAD Jan 28 06:56:06.409000 audit: BPF prog-id=41 op=UNLOAD Jan 28 06:56:06.409000 audit: BPF prog-id=79 op=LOAD Jan 28 06:56:06.409000 audit: BPF prog-id=80 op=LOAD Jan 28 06:56:06.409000 audit: BPF prog-id=42 op=UNLOAD Jan 28 06:56:06.409000 audit: BPF prog-id=43 op=UNLOAD Jan 28 06:56:06.412000 audit: BPF prog-id=81 op=LOAD Jan 28 06:56:06.412000 audit: BPF prog-id=58 op=UNLOAD Jan 28 06:56:06.412000 audit: BPF prog-id=82 op=LOAD Jan 28 06:56:06.412000 audit: BPF prog-id=83 op=LOAD Jan 28 06:56:06.412000 audit: BPF prog-id=59 op=UNLOAD Jan 28 06:56:06.412000 audit: BPF prog-id=60 op=UNLOAD Jan 28 06:56:06.414000 audit: BPF prog-id=84 op=LOAD Jan 28 06:56:06.414000 audit: BPF prog-id=53 op=UNLOAD Jan 28 06:56:06.416000 audit: BPF prog-id=85 op=LOAD Jan 28 06:56:06.416000 audit: BPF prog-id=57 op=UNLOAD Jan 28 06:56:06.440629 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 06:56:06.440783 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 28 06:56:06.441338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:56:06.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:56:06.441455 systemd[1]: kubelet.service: Consumed 170ms CPU time, 97.8M memory peak. Jan 28 06:56:06.444000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:56:06.649851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:56:06.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:06.664741 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:56:06.778368 kubelet[2528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:56:06.779211 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 06:56:06.779211 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:56:06.782636 kubelet[2528]: I0128 06:56:06.781278 2528 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:56:07.295109 update_engine[1615]: I20260128 06:56:07.293803 1615 update_attempter.cc:509] Updating boot flags... Jan 28 06:56:07.628131 kubelet[2528]: I0128 06:56:07.627869 2528 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 06:56:07.628131 kubelet[2528]: I0128 06:56:07.627956 2528 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:56:07.628758 kubelet[2528]: I0128 06:56:07.628424 2528 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 06:56:07.672576 kubelet[2528]: I0128 06:56:07.672420 2528 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:56:07.674336 kubelet[2528]: E0128 06:56:07.674261 2528 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.31.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 06:56:07.715727 kubelet[2528]: I0128 06:56:07.714637 2528 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:56:07.726190 kubelet[2528]: I0128 06:56:07.726135 2528 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 06:56:07.729650 kubelet[2528]: I0128 06:56:07.729526 2528 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:56:07.732828 kubelet[2528]: I0128 06:56:07.729601 2528 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gf17r.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:56:07.733134 kubelet[2528]: I0128 06:56:07.732844 2528 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 06:56:07.733134 kubelet[2528]: I0128 06:56:07.732868 2528 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 06:56:07.733987 kubelet[2528]: I0128 06:56:07.733961 2528 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:56:07.738153 kubelet[2528]: I0128 06:56:07.737813 2528 kubelet.go:480] "Attempting to sync node with API server" Jan 28 06:56:07.738153 kubelet[2528]: I0128 06:56:07.737906 2528 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:56:07.738153 kubelet[2528]: I0128 06:56:07.737990 2528 kubelet.go:386] "Adding apiserver pod source" Jan 28 06:56:07.738153 kubelet[2528]: I0128 06:56:07.738033 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:56:07.744498 kubelet[2528]: I0128 06:56:07.744463 2528 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 06:56:07.747019 kubelet[2528]: I0128 06:56:07.746991 2528 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 06:56:07.748748 kubelet[2528]: W0128 06:56:07.748031 2528 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 28 06:56:07.750971 kubelet[2528]: E0128 06:56:07.750905 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gf17r.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 06:56:07.758418 kubelet[2528]: E0128 06:56:07.758377 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.31.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 06:56:07.766960 kubelet[2528]: I0128 06:56:07.766874 2528 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:56:07.767124 kubelet[2528]: I0128 06:56:07.767034 2528 server.go:1289] "Started kubelet" Jan 28 06:56:07.771812 kubelet[2528]: I0128 06:56:07.771428 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:56:07.775595 kubelet[2528]: E0128 06:56:07.771734 2528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.31.94:6443/api/v1/namespaces/default/events\": dial tcp 10.230.31.94:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gf17r.gb1.brightbox.com.188ed2b2842fe9e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gf17r.gb1.brightbox.com,UID:srv-gf17r.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gf17r.gb1.brightbox.com,},FirstTimestamp:2026-01-28 06:56:07.766927849 +0000 UTC m=+1.050237313,LastTimestamp:2026-01-28 06:56:07.766927849 +0000 UTC m=+1.050237313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gf17r.gb1.brightbox.com,}" Jan 28 06:56:07.775595 kubelet[2528]: I0128 06:56:07.774852 2528 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:56:07.779250 kubelet[2528]: I0128 06:56:07.779226 2528 server.go:317] "Adding debug handlers to kubelet server" Jan 28 06:56:07.786906 kubelet[2528]: I0128 06:56:07.786862 2528 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:56:07.786906 kubelet[2528]: E0128 06:56:07.787350 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:07.790409 kubelet[2528]: I0128 06:56:07.790304 2528 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:56:07.790491 kubelet[2528]: I0128 06:56:07.790434 2528 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:56:07.790831 kubelet[2528]: I0128 06:56:07.790694 2528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 06:56:07.791704 kubelet[2528]: I0128 06:56:07.791586 2528 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:56:07.793843 kubelet[2528]: I0128 06:56:07.793780 2528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 
06:56:07.794084 kubelet[2528]: E0128 06:56:07.791693 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 06:56:07.797100 kubelet[2528]: E0128 06:56:07.791792 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gf17r.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="200ms" Jan 28 06:56:07.797320 kubelet[2528]: I0128 06:56:07.797220 2528 factory.go:223] Registration of the systemd container factory successfully Jan 28 06:56:07.798388 kubelet[2528]: I0128 06:56:07.797383 2528 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:56:07.801995 kubelet[2528]: I0128 06:56:07.801438 2528 factory.go:223] Registration of the containerd container factory successfully Jan 28 06:56:07.802673 kubelet[2528]: E0128 06:56:07.802644 2528 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:56:07.809000 audit[2560]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.809000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd66cd440 a2=0 a3=0 items=0 ppid=2528 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 06:56:07.817000 audit[2563]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.817000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2ff22b50 a2=0 a3=0 items=0 ppid=2528 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 06:56:07.821000 audit[2566]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.821000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda9146850 a2=0 a3=0 items=0 ppid=2528 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:56:07.828000 audit[2569]: NETFILTER_CFG table=filter:45 
family=2 entries=2 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.828000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd6558a9f0 a2=0 a3=0 items=0 ppid=2528 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:56:07.833133 kubelet[2528]: I0128 06:56:07.833094 2528 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:56:07.833455 kubelet[2528]: I0128 06:56:07.833230 2528 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:56:07.833631 kubelet[2528]: I0128 06:56:07.833269 2528 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:56:07.858000 audit[2574]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.858000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc2a767f80 a2=0 a3=0 items=0 ppid=2528 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 06:56:07.860534 kubelet[2528]: I0128 06:56:07.860326 2528 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 06:56:07.861000 audit[2576]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:07.861000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdc21bafa0 a2=0 a3=0 items=0 ppid=2528 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 06:56:07.863832 kubelet[2528]: I0128 06:56:07.863152 2528 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 06:56:07.863832 kubelet[2528]: I0128 06:56:07.863229 2528 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 06:56:07.863832 kubelet[2528]: I0128 06:56:07.863279 2528 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
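The audit PROCTITLE fields above hold the invoked command line hex-encoded, with NUL bytes separating the arguments. A short illustrative Python snippet (not auditd or kubelet code), fed the hex copied from the ip6tables record a few entries up, recovers the readable command:

    # Hex string copied verbatim from the ip6tables PROCTITLE record above.
    proctitle_hex = (
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> ip6tables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle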
Jan 28 06:56:07.863832 kubelet[2528]: I0128 06:56:07.863302 2528 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 06:56:07.863832 kubelet[2528]: E0128 06:56:07.863387 2528 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:56:07.865000 audit[2575]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.865000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2fc72c30 a2=0 a3=0 items=0 ppid=2528 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 06:56:07.866000 audit[2577]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:07.866000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd523a1c70 a2=0 a3=0 items=0 ppid=2528 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 06:56:07.868000 audit[2578]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.868000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc598305d0 a2=0 a3=0 items=0 ppid=2528 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 06:56:07.868000 audit[2579]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:07.868000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc7a699a0 a2=0 a3=0 items=0 ppid=2528 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 06:56:07.870000 audit[2580]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:07.870000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd76fc74a0 a2=0 a3=0 items=0 ppid=2528 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.870000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 06:56:07.872000 audit[2581]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:07.872000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb315c0d0 a2=0 a3=0 items=0 ppid=2528 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:07.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 06:56:07.882457 kubelet[2528]: E0128 06:56:07.878889 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 06:56:07.882457 kubelet[2528]: I0128 06:56:07.879867 2528 policy_none.go:49] "None policy: Start" Jan 28 06:56:07.882457 kubelet[2528]: I0128 06:56:07.879931 2528 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:56:07.882457 kubelet[2528]: I0128 06:56:07.879998 2528 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:56:07.888460 kubelet[2528]: E0128 06:56:07.888413 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:07.895435 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 06:56:07.912461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 06:56:07.921695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 06:56:07.931156 kubelet[2528]: E0128 06:56:07.931078 2528 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 06:56:07.932100 kubelet[2528]: I0128 06:56:07.932075 2528 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:56:07.932684 kubelet[2528]: I0128 06:56:07.932113 2528 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:56:07.934582 kubelet[2528]: I0128 06:56:07.934551 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:56:07.936155 kubelet[2528]: E0128 06:56:07.935602 2528 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 06:56:07.936155 kubelet[2528]: E0128 06:56:07.935676 2528 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:07.985510 systemd[1]: Created slice kubepods-burstable-podf94515557530b452d95f42796f05d433.slice - libcontainer container kubepods-burstable-podf94515557530b452d95f42796f05d433.slice. 
Jan 28 06:56:07.996540 kubelet[2528]: E0128 06:56:07.996478 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:07.999573 systemd[1]: Created slice kubepods-burstable-podd2b4992ad3d519ed33697c67630efb85.slice - libcontainer container kubepods-burstable-podd2b4992ad3d519ed33697c67630efb85.slice. Jan 28 06:56:08.000152 kubelet[2528]: E0128 06:56:07.999723 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gf17r.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="400ms" Jan 28 06:56:08.003796 kubelet[2528]: E0128 06:56:08.003333 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.007780 systemd[1]: Created slice kubepods-burstable-pod1d1eb6497936edeb404e2d26bf176d5e.slice - libcontainer container kubepods-burstable-pod1d1eb6497936edeb404e2d26bf176d5e.slice. Jan 28 06:56:08.011000 kubelet[2528]: E0128 06:56:08.010972 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.035754 kubelet[2528]: I0128 06:56:08.035715 2528 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.036717 kubelet[2528]: E0128 06:56:08.036676 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091262 kubelet[2528]: I0128 06:56:08.091132 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091262 kubelet[2528]: I0128 06:56:08.091219 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-k8s-certs\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091967 kubelet[2528]: I0128 06:56:08.091697 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-ca-certs\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091967 kubelet[2528]: I0128 06:56:08.091799 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-kubeconfig\") pod 
\"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091967 kubelet[2528]: I0128 06:56:08.091845 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2b4992ad3d519ed33697c67630efb85-kubeconfig\") pod \"kube-scheduler-srv-gf17r.gb1.brightbox.com\" (UID: \"d2b4992ad3d519ed33697c67630efb85\") " pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091967 kubelet[2528]: I0128 06:56:08.091887 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-ca-certs\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.091967 kubelet[2528]: I0128 06:56:08.091921 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.092455 kubelet[2528]: I0128 06:56:08.092303 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-flexvolume-dir\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.092455 kubelet[2528]: I0128 06:56:08.092377 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-k8s-certs\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.240936 kubelet[2528]: I0128 06:56:08.240736 2528 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.241627 kubelet[2528]: E0128 06:56:08.241582 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.299966 containerd[1638]: time="2026-01-28T06:56:08.299738419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gf17r.gb1.brightbox.com,Uid:f94515557530b452d95f42796f05d433,Namespace:kube-system,Attempt:0,}" Jan 28 06:56:08.305510 containerd[1638]: time="2026-01-28T06:56:08.305447583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gf17r.gb1.brightbox.com,Uid:d2b4992ad3d519ed33697c67630efb85,Namespace:kube-system,Attempt:0,}" Jan 28 06:56:08.312794 containerd[1638]: time="2026-01-28T06:56:08.312733592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gf17r.gb1.brightbox.com,Uid:1d1eb6497936edeb404e2d26bf176d5e,Namespace:kube-system,Attempt:0,}" Jan 28 
06:56:08.401822 kubelet[2528]: E0128 06:56:08.401691 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gf17r.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="800ms" Jan 28 06:56:08.488792 containerd[1638]: time="2026-01-28T06:56:08.488688390Z" level=info msg="connecting to shim bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11" address="unix:///run/containerd/s/82df916d47617416eefa4b391f328a5b5b87562c497dc482ce340d9516cf7c5d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:08.489816 containerd[1638]: time="2026-01-28T06:56:08.489769874Z" level=info msg="connecting to shim 702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4" address="unix:///run/containerd/s/4ea53475268f8675e9a460c749b6775dd32ae84312307ff5071929067ae7fcc3" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:08.497198 containerd[1638]: time="2026-01-28T06:56:08.497154379Z" level=info msg="connecting to shim 2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907" address="unix:///run/containerd/s/15cbfc141ed60476f473de97243d699e97d1cc6dfb5017c35a9d5408676bf0b9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:08.602423 kubelet[2528]: E0128 06:56:08.602121 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.31.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 06:56:08.632308 systemd[1]: Started cri-containerd-2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907.scope - libcontainer container 2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907. Jan 28 06:56:08.635420 systemd[1]: Started cri-containerd-702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4.scope - libcontainer container 702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4. Jan 28 06:56:08.638613 systemd[1]: Started cri-containerd-bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11.scope - libcontainer container bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11. 
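The repeated "Failed to ensure lease exists, will retry" entries show the retry interval doubling (200ms, then 400ms, then 800ms above, and 1.6s further down) while the API server at 10.230.31.94:6443 still refuses connections. A minimal sketch of that capped-doubling pattern, assuming an arbitrary cap and helper name for illustration rather than the actual kubelet backoff parameters:

    import itertools

    def lease_retry_intervals(initial=0.2, factor=2.0, cap=7.0):
        # Capped exponential backoff in seconds: 0.2, 0.4, 0.8, 1.6, ... up to `cap`.
        interval = initial
        while True:
            yield min(interval, cap)
            interval *= factor

    print([round(i, 1) for i in itertools.islice(lease_retry_intervals(), 5)])
    # -> [0.2, 0.4, 0.8, 1.6, 3.2]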
Jan 28 06:56:08.653602 kubelet[2528]: I0128 06:56:08.653247 2528 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.654827 kubelet[2528]: E0128 06:56:08.654772 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:08.672000 audit: BPF prog-id=86 op=LOAD Jan 28 06:56:08.673000 audit: BPF prog-id=87 op=LOAD Jan 28 06:56:08.673000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.673000 audit: BPF prog-id=87 op=UNLOAD Jan 28 06:56:08.673000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.674000 audit: BPF prog-id=88 op=LOAD Jan 28 06:56:08.674000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.675000 audit: BPF prog-id=89 op=LOAD Jan 28 06:56:08.675000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.675000 audit: BPF prog-id=89 op=UNLOAD Jan 28 06:56:08.675000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.676000 audit: BPF prog-id=88 op=UNLOAD Jan 28 06:56:08.676000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.677000 audit: BPF prog-id=90 op=LOAD Jan 28 06:56:08.677000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2607 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363866363336306436303537303937333439363130303665366262 Jan 28 06:56:08.678000 audit: BPF prog-id=91 op=LOAD Jan 28 06:56:08.679000 audit: BPF prog-id=92 op=LOAD Jan 28 06:56:08.679000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.679000 audit: BPF prog-id=92 op=UNLOAD Jan 28 06:56:08.679000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.680000 audit: BPF prog-id=93 op=LOAD Jan 28 06:56:08.681000 audit: BPF prog-id=94 op=LOAD Jan 28 06:56:08.681000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:56:08.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.681000 audit: BPF prog-id=95 op=LOAD Jan 28 06:56:08.681000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.681000 audit: BPF prog-id=95 op=UNLOAD Jan 28 06:56:08.681000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.681000 audit: BPF prog-id=94 op=UNLOAD Jan 28 06:56:08.681000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.681000 audit: BPF prog-id=96 op=LOAD Jan 28 06:56:08.681000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2611 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261643433373063306365353233373430666531303336613837613632 Jan 28 06:56:08.683000 audit: BPF prog-id=97 op=LOAD Jan 28 06:56:08.683000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.683000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.683000 audit: BPF prog-id=97 op=UNLOAD Jan 28 06:56:08.683000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.684000 audit: BPF prog-id=98 op=LOAD Jan 28 06:56:08.684000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.684000 audit: BPF prog-id=99 op=LOAD Jan 28 06:56:08.684000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.684000 audit: BPF prog-id=99 op=UNLOAD Jan 28 06:56:08.684000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.684000 audit: BPF prog-id=98 op=UNLOAD Jan 28 06:56:08.684000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.684000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.685000 audit: BPF prog-id=100 op=LOAD Jan 28 06:56:08.685000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2610 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730323936343132356261376663656533643664633061306365313065 Jan 28 06:56:08.772088 containerd[1638]: time="2026-01-28T06:56:08.770799208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gf17r.gb1.brightbox.com,Uid:1d1eb6497936edeb404e2d26bf176d5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907\"" Jan 28 06:56:08.786557 containerd[1638]: time="2026-01-28T06:56:08.786479655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gf17r.gb1.brightbox.com,Uid:f94515557530b452d95f42796f05d433,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11\"" Jan 28 06:56:08.798617 containerd[1638]: time="2026-01-28T06:56:08.798549857Z" level=info msg="CreateContainer within sandbox \"2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 06:56:08.800806 containerd[1638]: time="2026-01-28T06:56:08.800467229Z" level=info msg="CreateContainer within sandbox \"bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 06:56:08.814064 containerd[1638]: time="2026-01-28T06:56:08.813976121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gf17r.gb1.brightbox.com,Uid:d2b4992ad3d519ed33697c67630efb85,Namespace:kube-system,Attempt:0,} returns sandbox id \"702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4\"" Jan 28 06:56:08.820598 containerd[1638]: time="2026-01-28T06:56:08.820493244Z" level=info msg="Container c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:08.821394 containerd[1638]: time="2026-01-28T06:56:08.821360012Z" level=info msg="CreateContainer within sandbox \"702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 06:56:08.829321 containerd[1638]: time="2026-01-28T06:56:08.829238441Z" level=info msg="CreateContainer within sandbox \"bb68f6360d605709734961006e6bb24d21c7fb90d1726c6dca1bb77b033f0a11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330\"" Jan 28 06:56:08.830634 containerd[1638]: time="2026-01-28T06:56:08.830598916Z" level=info msg="StartContainer for \"c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330\"" Jan 28 06:56:08.832307 
containerd[1638]: time="2026-01-28T06:56:08.832248047Z" level=info msg="connecting to shim c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330" address="unix:///run/containerd/s/82df916d47617416eefa4b391f328a5b5b87562c497dc482ce340d9516cf7c5d" protocol=ttrpc version=3 Jan 28 06:56:08.833066 containerd[1638]: time="2026-01-28T06:56:08.833022843Z" level=info msg="Container c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:08.841975 containerd[1638]: time="2026-01-28T06:56:08.841801959Z" level=info msg="CreateContainer within sandbox \"2ad4370c0ce523740fe1036a87a623ff9eed613aaecf9905d450a5b4ce656907\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1\"" Jan 28 06:56:08.843351 containerd[1638]: time="2026-01-28T06:56:08.843226350Z" level=info msg="StartContainer for \"c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1\"" Jan 28 06:56:08.844321 containerd[1638]: time="2026-01-28T06:56:08.844238521Z" level=info msg="Container ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:08.846363 containerd[1638]: time="2026-01-28T06:56:08.846329199Z" level=info msg="connecting to shim c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1" address="unix:///run/containerd/s/15cbfc141ed60476f473de97243d699e97d1cc6dfb5017c35a9d5408676bf0b9" protocol=ttrpc version=3 Jan 28 06:56:08.853560 containerd[1638]: time="2026-01-28T06:56:08.853509021Z" level=info msg="CreateContainer within sandbox \"702964125ba7fcee3d6dc0a0ce10e488fe13601eecb90b53098106b206bd6ab4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30\"" Jan 28 06:56:08.856657 containerd[1638]: time="2026-01-28T06:56:08.855580939Z" level=info msg="StartContainer for \"ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30\"" Jan 28 06:56:08.860651 containerd[1638]: time="2026-01-28T06:56:08.860607815Z" level=info msg="connecting to shim ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30" address="unix:///run/containerd/s/4ea53475268f8675e9a460c749b6775dd32ae84312307ff5071929067ae7fcc3" protocol=ttrpc version=3 Jan 28 06:56:08.873266 systemd[1]: Started cri-containerd-c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330.scope - libcontainer container c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330. Jan 28 06:56:08.913244 systemd[1]: Started cri-containerd-c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1.scope - libcontainer container c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1. Jan 28 06:56:08.916467 systemd[1]: Started cri-containerd-ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30.scope - libcontainer container ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30. 
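Each "connecting to shim" entry above pairs a container ID with the containerd shim socket that serves it over ttrpc. A small log-parsing sketch (the regex and the sample message are adaptations of the entries above, not containerd code) pulls that pair out of such a line:

    import re

    # Sample message adapted from the kube-controller-manager shim entry above.
    line = ('msg="connecting to shim c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330" '
            'address="unix:///run/containerd/s/82df916d47617416eefa4b391f328a5b5b87562c497dc482ce340d9516cf7c5d" '
            'protocol=ttrpc version=3')
    match = re.search(r'connecting to shim (\w+)" address="(unix://\S+)"', line)
    if match:
        print(match.group(1)[:12], "->", match.group(2))
    # -> c4d7a5e5bd31 -> unix:///run/containerd/s/82df916d47617416eefa4b391f328a5b5b87562c497dc482ce340d9516cf7c5d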
Jan 28 06:56:08.920260 kubelet[2528]: E0128 06:56:08.920214 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.31.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 06:56:08.921000 audit: BPF prog-id=101 op=LOAD Jan 28 06:56:08.924000 audit: BPF prog-id=102 op=LOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=102 op=UNLOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=103 op=LOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=104 op=LOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=104 op=UNLOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=103 op=UNLOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.924000 audit: BPF prog-id=105 op=LOAD Jan 28 06:56:08.924000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2607 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334643761356535626433316130376638383533613138636130656236 Jan 28 06:56:08.946000 audit: BPF prog-id=106 op=LOAD Jan 28 06:56:08.947000 audit: BPF prog-id=107 op=LOAD Jan 28 06:56:08.947000 audit[2729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.948000 audit: BPF prog-id=107 op=UNLOAD Jan 28 06:56:08.948000 audit[2729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.949000 audit: BPF prog-id=108 op=LOAD Jan 28 06:56:08.949000 audit[2729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.949000 audit: BPF prog-id=109 op=LOAD Jan 28 06:56:08.949000 audit[2729]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.950000 audit: BPF prog-id=109 op=UNLOAD Jan 28 06:56:08.950000 audit[2729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.950000 audit: BPF prog-id=108 op=UNLOAD Jan 28 06:56:08.950000 audit[2729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.950000 audit: BPF prog-id=110 op=LOAD Jan 28 06:56:08.950000 audit[2729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2611 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386637613432396432363664366131303134383933316166353933 Jan 28 06:56:08.973000 audit: BPF prog-id=111 op=LOAD Jan 28 06:56:08.975000 audit: BPF prog-id=112 op=LOAD Jan 28 06:56:08.975000 audit[2736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.975000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.975000 audit: BPF prog-id=112 op=UNLOAD Jan 28 06:56:08.975000 audit[2736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.976000 audit: BPF prog-id=113 op=LOAD Jan 28 06:56:08.976000 audit[2736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.976000 audit: BPF prog-id=114 op=LOAD Jan 28 06:56:08.976000 audit[2736]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.976000 audit: BPF prog-id=114 op=UNLOAD Jan 28 06:56:08.976000 audit[2736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.976000 audit: BPF prog-id=113 op=UNLOAD Jan 28 06:56:08.976000 audit[2736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.976000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:08.976000 audit: BPF prog-id=115 op=LOAD Jan 28 06:56:08.976000 audit[2736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2610 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:08.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365313130663063396533623833616534356434643465323438616239 Jan 28 06:56:09.016486 kubelet[2528]: E0128 06:56:09.016437 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.31.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 06:56:09.034751 containerd[1638]: time="2026-01-28T06:56:09.034541740Z" level=info msg="StartContainer for \"c4d7a5e5bd31a07f8853a18ca0eb6f41c6185eba537c30638f98f4e2d8872330\" returns successfully" Jan 28 06:56:09.066185 containerd[1638]: time="2026-01-28T06:56:09.066031286Z" level=info msg="StartContainer for \"c48f7a429d266d6a10148931af593a0d36c79e25c6f42b20da2423d6d92ac9c1\" returns successfully" Jan 28 06:56:09.078639 containerd[1638]: time="2026-01-28T06:56:09.078571152Z" level=info msg="StartContainer for \"ce110f0c9e3b83ae45d4d4e248ab922f81defdea0a842e966159c8a949906b30\" returns successfully" Jan 28 06:56:09.203996 kubelet[2528]: E0128 06:56:09.202841 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gf17r.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.94:6443: connect: connection refused" interval="1.6s" Jan 28 06:56:09.295740 kubelet[2528]: E0128 06:56:09.295528 2528 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.31.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gf17r.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.31.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 06:56:09.459982 kubelet[2528]: I0128 06:56:09.459736 2528 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:09.461250 kubelet[2528]: E0128 06:56:09.461152 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.94:6443/api/v1/nodes\": dial tcp 10.230.31.94:6443: connect: connection refused" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:09.903565 kubelet[2528]: E0128 06:56:09.903433 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:09.912971 kubelet[2528]: E0128 06:56:09.911151 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:09.920424 kubelet[2528]: E0128 06:56:09.920384 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:10.927317 kubelet[2528]: E0128 06:56:10.926896 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:10.929822 kubelet[2528]: E0128 06:56:10.929064 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:10.929822 kubelet[2528]: E0128 06:56:10.929654 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.065123 kubelet[2528]: I0128 06:56:11.065083 2528 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.689306 kubelet[2528]: E0128 06:56:11.689221 2528 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.740133 kubelet[2528]: I0128 06:56:11.740068 2528 kubelet_node_status.go:78] "Successfully registered node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.740355 kubelet[2528]: E0128 06:56:11.740195 2528 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-gf17r.gb1.brightbox.com\": node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:11.798057 kubelet[2528]: E0128 06:56:11.797244 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:11.899113 kubelet[2528]: E0128 06:56:11.899027 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:11.927329 kubelet[2528]: E0128 06:56:11.927272 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.928678 kubelet[2528]: E0128 06:56:11.928011 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.928678 kubelet[2528]: E0128 06:56:11.928282 2528 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gf17r.gb1.brightbox.com\" not found" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:11.999893 kubelet[2528]: E0128 06:56:11.999803 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:12.100071 kubelet[2528]: E0128 06:56:12.100006 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gf17r.gb1.brightbox.com\" not found" Jan 28 06:56:12.191800 kubelet[2528]: I0128 06:56:12.191349 2528 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.200767 kubelet[2528]: E0128 06:56:12.200712 2528 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.200767 kubelet[2528]: I0128 06:56:12.200756 2528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.206121 kubelet[2528]: E0128 06:56:12.206045 2528 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.206260 kubelet[2528]: I0128 06:56:12.206132 2528 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.208264 kubelet[2528]: E0128 06:56:12.208233 2528 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gf17r.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:12.756901 kubelet[2528]: I0128 06:56:12.756752 2528 apiserver.go:52] "Watching apiserver" Jan 28 06:56:12.791634 kubelet[2528]: I0128 06:56:12.791560 2528 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:56:13.705618 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-10.scope)... Jan 28 06:56:13.705653 systemd[1]: Reloading... Jan 28 06:56:13.918078 zram_generator::config[2872]: No configuration found. Jan 28 06:56:14.329201 systemd[1]: Reloading finished in 622 ms. Jan 28 06:56:14.379652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:56:14.398586 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 06:56:14.399088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:56:14.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:14.404853 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 28 06:56:14.405033 kernel: audit: type=1131 audit(1769583374.398:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:14.408798 systemd[1]: kubelet.service: Consumed 1.596s CPU time, 128.6M memory peak. Jan 28 06:56:14.413000 audit: BPF prog-id=116 op=LOAD Jan 28 06:56:14.414108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 06:56:14.416966 kernel: audit: type=1334 audit(1769583374.413:400): prog-id=116 op=LOAD Jan 28 06:56:14.421044 kernel: audit: type=1334 audit(1769583374.414:401): prog-id=78 op=UNLOAD Jan 28 06:56:14.421123 kernel: audit: type=1334 audit(1769583374.414:402): prog-id=117 op=LOAD Jan 28 06:56:14.414000 audit: BPF prog-id=78 op=UNLOAD Jan 28 06:56:14.414000 audit: BPF prog-id=117 op=LOAD Jan 28 06:56:14.414000 audit: BPF prog-id=118 op=LOAD Jan 28 06:56:14.425051 kernel: audit: type=1334 audit(1769583374.414:403): prog-id=118 op=LOAD Jan 28 06:56:14.414000 audit: BPF prog-id=79 op=UNLOAD Jan 28 06:56:14.414000 audit: BPF prog-id=80 op=UNLOAD Jan 28 06:56:14.428723 kernel: audit: type=1334 audit(1769583374.414:404): prog-id=79 op=UNLOAD Jan 28 06:56:14.428804 kernel: audit: type=1334 audit(1769583374.414:405): prog-id=80 op=UNLOAD Jan 28 06:56:14.415000 audit: BPF prog-id=119 op=LOAD Jan 28 06:56:14.432997 kernel: audit: type=1334 audit(1769583374.415:406): prog-id=119 op=LOAD Jan 28 06:56:14.415000 audit: BPF prog-id=66 op=UNLOAD Jan 28 06:56:14.415000 audit: BPF prog-id=120 op=LOAD Jan 28 06:56:14.436638 kernel: audit: type=1334 audit(1769583374.415:407): prog-id=66 op=UNLOAD Jan 28 06:56:14.436725 kernel: audit: type=1334 audit(1769583374.415:408): prog-id=120 op=LOAD Jan 28 06:56:14.415000 audit: BPF prog-id=121 op=LOAD Jan 28 06:56:14.415000 audit: BPF prog-id=67 op=UNLOAD Jan 28 06:56:14.415000 audit: BPF prog-id=68 op=UNLOAD Jan 28 06:56:14.417000 audit: BPF prog-id=122 op=LOAD Jan 28 06:56:14.417000 audit: BPF prog-id=123 op=LOAD Jan 28 06:56:14.417000 audit: BPF prog-id=73 op=UNLOAD Jan 28 06:56:14.417000 audit: BPF prog-id=74 op=UNLOAD Jan 28 06:56:14.417000 audit: BPF prog-id=124 op=LOAD Jan 28 06:56:14.417000 audit: BPF prog-id=69 op=UNLOAD Jan 28 06:56:14.419000 audit: BPF prog-id=125 op=LOAD Jan 28 06:56:14.419000 audit: BPF prog-id=84 op=UNLOAD Jan 28 06:56:14.421000 audit: BPF prog-id=126 op=LOAD Jan 28 06:56:14.422000 audit: BPF prog-id=85 op=UNLOAD Jan 28 06:56:14.422000 audit: BPF prog-id=127 op=LOAD Jan 28 06:56:14.422000 audit: BPF prog-id=75 op=UNLOAD Jan 28 06:56:14.423000 audit: BPF prog-id=128 op=LOAD Jan 28 06:56:14.423000 audit: BPF prog-id=129 op=LOAD Jan 28 06:56:14.423000 audit: BPF prog-id=76 op=UNLOAD Jan 28 06:56:14.423000 audit: BPF prog-id=77 op=UNLOAD Jan 28 06:56:14.425000 audit: BPF prog-id=130 op=LOAD Jan 28 06:56:14.425000 audit: BPF prog-id=65 op=UNLOAD Jan 28 06:56:14.426000 audit: BPF prog-id=131 op=LOAD Jan 28 06:56:14.427000 audit: BPF prog-id=70 op=UNLOAD Jan 28 06:56:14.427000 audit: BPF prog-id=132 op=LOAD Jan 28 06:56:14.427000 audit: BPF prog-id=133 op=LOAD Jan 28 06:56:14.427000 audit: BPF prog-id=71 op=UNLOAD Jan 28 06:56:14.427000 audit: BPF prog-id=72 op=UNLOAD Jan 28 06:56:14.430000 audit: BPF prog-id=134 op=LOAD Jan 28 06:56:14.430000 audit: BPF prog-id=81 op=UNLOAD Jan 28 06:56:14.430000 audit: BPF prog-id=135 op=LOAD Jan 28 06:56:14.430000 audit: BPF prog-id=136 op=LOAD Jan 28 06:56:14.430000 audit: BPF prog-id=82 op=UNLOAD Jan 28 06:56:14.430000 audit: BPF prog-id=83 op=UNLOAD Jan 28 06:56:14.721280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:56:14.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:56:14.738110 (kubelet)[2936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:56:14.851188 kubelet[2936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:56:14.851188 kubelet[2936]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 06:56:14.851188 kubelet[2936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:56:14.851188 kubelet[2936]: I0128 06:56:14.850385 2936 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:56:14.861366 kubelet[2936]: I0128 06:56:14.860589 2936 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 28 06:56:14.861366 kubelet[2936]: I0128 06:56:14.860621 2936 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:56:14.861366 kubelet[2936]: I0128 06:56:14.860997 2936 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 06:56:14.863580 kubelet[2936]: I0128 06:56:14.863154 2936 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 06:56:14.872067 kubelet[2936]: I0128 06:56:14.871420 2936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:56:14.882623 kubelet[2936]: I0128 06:56:14.882292 2936 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:56:14.889588 kubelet[2936]: I0128 06:56:14.889426 2936 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 06:56:14.891795 kubelet[2936]: I0128 06:56:14.891256 2936 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:56:14.891795 kubelet[2936]: I0128 06:56:14.891312 2936 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gf17r.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:56:14.891795 kubelet[2936]: I0128 06:56:14.891592 2936 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 06:56:14.891795 kubelet[2936]: I0128 06:56:14.891609 2936 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 06:56:14.891795 kubelet[2936]: I0128 06:56:14.891706 2936 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:56:14.892469 kubelet[2936]: I0128 06:56:14.892449 2936 kubelet.go:480] "Attempting to sync node with API server" Jan 28 06:56:14.893037 kubelet[2936]: I0128 06:56:14.893016 2936 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:56:14.893196 kubelet[2936]: I0128 06:56:14.893175 2936 kubelet.go:386] "Adding apiserver pod source" Jan 28 06:56:14.893324 kubelet[2936]: I0128 06:56:14.893304 2936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:56:14.913189 kubelet[2936]: I0128 06:56:14.913144 2936 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 06:56:14.917060 kubelet[2936]: I0128 06:56:14.917023 2936 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 06:56:14.926762 kubelet[2936]: I0128 06:56:14.926735 2936 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:56:14.926913 kubelet[2936]: I0128 06:56:14.926813 2936 server.go:1289] "Started kubelet" Jan 28 06:56:14.929404 kubelet[2936]: I0128 06:56:14.929059 2936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 
28 06:56:14.930187 kubelet[2936]: I0128 06:56:14.930163 2936 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:56:14.933871 kubelet[2936]: I0128 06:56:14.933828 2936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:56:14.939634 kubelet[2936]: I0128 06:56:14.939467 2936 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:56:14.943260 kubelet[2936]: I0128 06:56:14.943035 2936 server.go:317] "Adding debug handlers to kubelet server" Jan 28 06:56:14.944249 kubelet[2936]: I0128 06:56:14.943964 2936 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:56:14.948936 kubelet[2936]: I0128 06:56:14.946937 2936 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:56:14.948936 kubelet[2936]: I0128 06:56:14.947139 2936 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:56:14.950645 kubelet[2936]: I0128 06:56:14.950244 2936 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 06:56:14.960584 kubelet[2936]: I0128 06:56:14.960552 2936 factory.go:223] Registration of the systemd container factory successfully Jan 28 06:56:14.969047 kubelet[2936]: I0128 06:56:14.968989 2936 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:56:14.970567 kubelet[2936]: I0128 06:56:14.969459 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 06:56:14.973294 kubelet[2936]: I0128 06:56:14.973191 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 28 06:56:14.973440 kubelet[2936]: I0128 06:56:14.973418 2936 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 06:56:14.973719 kubelet[2936]: I0128 06:56:14.973606 2936 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 06:56:14.974762 kubelet[2936]: I0128 06:56:14.974741 2936 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 06:56:14.976127 kubelet[2936]: E0128 06:56:14.975022 2936 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:56:14.990120 kubelet[2936]: E0128 06:56:14.990082 2936 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:56:14.991583 kubelet[2936]: I0128 06:56:14.991556 2936 factory.go:223] Registration of the containerd container factory successfully Jan 28 06:56:15.075535 kubelet[2936]: E0128 06:56:15.075478 2936 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093372 2936 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093401 2936 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093436 2936 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093664 2936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093688 2936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093725 2936 policy_none.go:49] "None policy: Start" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093752 2936 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093784 2936 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:56:15.093972 kubelet[2936]: I0128 06:56:15.093961 2936 state_mem.go:75] "Updated machine memory state" Jan 28 06:56:15.103421 kubelet[2936]: E0128 06:56:15.103300 2936 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 06:56:15.103639 kubelet[2936]: I0128 06:56:15.103602 2936 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:56:15.103724 kubelet[2936]: I0128 06:56:15.103642 2936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:56:15.111044 kubelet[2936]: I0128 06:56:15.110163 2936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:56:15.119808 kubelet[2936]: E0128 06:56:15.118814 2936 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 06:56:15.230918 kubelet[2936]: I0128 06:56:15.230772 2936 kubelet_node_status.go:75] "Attempting to register node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.248839 kubelet[2936]: I0128 06:56:15.248740 2936 kubelet_node_status.go:124] "Node was previously registered" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.249095 kubelet[2936]: I0128 06:56:15.248975 2936 kubelet_node_status.go:78] "Successfully registered node" node="srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.277893 kubelet[2936]: I0128 06:56:15.277815 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.278298 kubelet[2936]: I0128 06:56:15.278258 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.280564 kubelet[2936]: I0128 06:56:15.279689 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.294315 kubelet[2936]: I0128 06:56:15.294247 2936 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:56:15.296011 kubelet[2936]: I0128 06:56:15.295981 2936 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:56:15.298549 kubelet[2936]: I0128 06:56:15.297814 2936 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:56:15.354584 kubelet[2936]: I0128 06:56:15.354039 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2b4992ad3d519ed33697c67630efb85-kubeconfig\") pod \"kube-scheduler-srv-gf17r.gb1.brightbox.com\" (UID: \"d2b4992ad3d519ed33697c67630efb85\") " pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.354584 kubelet[2936]: I0128 06:56:15.354119 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-ca-certs\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.354584 kubelet[2936]: I0128 06:56:15.354154 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-k8s-certs\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.354584 kubelet[2936]: I0128 06:56:15.354186 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-flexvolume-dir\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.354584 kubelet[2936]: I0128 06:56:15.354221 2936 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-k8s-certs\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.355073 kubelet[2936]: I0128 06:56:15.354259 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-kubeconfig\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.355073 kubelet[2936]: I0128 06:56:15.354290 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d1eb6497936edeb404e2d26bf176d5e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" (UID: \"1d1eb6497936edeb404e2d26bf176d5e\") " pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.355073 kubelet[2936]: I0128 06:56:15.354321 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-ca-certs\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.355073 kubelet[2936]: I0128 06:56:15.354351 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f94515557530b452d95f42796f05d433-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gf17r.gb1.brightbox.com\" (UID: \"f94515557530b452d95f42796f05d433\") " pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:15.913820 kubelet[2936]: I0128 06:56:15.913770 2936 apiserver.go:52] "Watching apiserver" Jan 28 06:56:15.947732 kubelet[2936]: I0128 06:56:15.947624 2936 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:56:16.045372 kubelet[2936]: I0128 06:56:16.045325 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:16.048159 kubelet[2936]: I0128 06:56:16.048132 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:16.059477 kubelet[2936]: I0128 06:56:16.059427 2936 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:56:16.063821 kubelet[2936]: E0128 06:56:16.059511 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gf17r.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:16.063821 kubelet[2936]: I0128 06:56:16.060214 2936 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 06:56:16.063821 kubelet[2936]: E0128 
06:56:16.060254 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gf17r.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" Jan 28 06:56:16.103973 kubelet[2936]: I0128 06:56:16.102883 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gf17r.gb1.brightbox.com" podStartSLOduration=1.102831374 podStartE2EDuration="1.102831374s" podCreationTimestamp="2026-01-28 06:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:56:16.089366558 +0000 UTC m=+1.326024105" watchObservedRunningTime="2026-01-28 06:56:16.102831374 +0000 UTC m=+1.339488894" Jan 28 06:56:16.131544 kubelet[2936]: I0128 06:56:16.131395 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gf17r.gb1.brightbox.com" podStartSLOduration=1.131371557 podStartE2EDuration="1.131371557s" podCreationTimestamp="2026-01-28 06:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:56:16.103263313 +0000 UTC m=+1.339920833" watchObservedRunningTime="2026-01-28 06:56:16.131371557 +0000 UTC m=+1.368029070" Jan 28 06:56:16.131810 kubelet[2936]: I0128 06:56:16.131545 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gf17r.gb1.brightbox.com" podStartSLOduration=1.131539137 podStartE2EDuration="1.131539137s" podCreationTimestamp="2026-01-28 06:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:56:16.131366995 +0000 UTC m=+1.368024529" watchObservedRunningTime="2026-01-28 06:56:16.131539137 +0000 UTC m=+1.368196652" Jan 28 06:56:20.525616 kubelet[2936]: I0128 06:56:20.525561 2936 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 06:56:20.526967 kubelet[2936]: I0128 06:56:20.526797 2936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 06:56:20.527038 containerd[1638]: time="2026-01-28T06:56:20.526534563Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 06:56:21.350741 systemd[1]: Created slice kubepods-besteffort-pod1c152806_f023_4db2_9981_8af52df2f7ef.slice - libcontainer container kubepods-besteffort-pod1c152806_f023_4db2_9981_8af52df2f7ef.slice. 
Jan 28 06:56:21.392559 kubelet[2936]: I0128 06:56:21.392489 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c152806-f023-4db2-9981-8af52df2f7ef-kube-proxy\") pod \"kube-proxy-jf2rx\" (UID: \"1c152806-f023-4db2-9981-8af52df2f7ef\") " pod="kube-system/kube-proxy-jf2rx" Jan 28 06:56:21.392559 kubelet[2936]: I0128 06:56:21.392554 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c152806-f023-4db2-9981-8af52df2f7ef-xtables-lock\") pod \"kube-proxy-jf2rx\" (UID: \"1c152806-f023-4db2-9981-8af52df2f7ef\") " pod="kube-system/kube-proxy-jf2rx" Jan 28 06:56:21.392875 kubelet[2936]: I0128 06:56:21.392594 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz876\" (UniqueName: \"kubernetes.io/projected/1c152806-f023-4db2-9981-8af52df2f7ef-kube-api-access-qz876\") pod \"kube-proxy-jf2rx\" (UID: \"1c152806-f023-4db2-9981-8af52df2f7ef\") " pod="kube-system/kube-proxy-jf2rx" Jan 28 06:56:21.392875 kubelet[2936]: I0128 06:56:21.392635 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c152806-f023-4db2-9981-8af52df2f7ef-lib-modules\") pod \"kube-proxy-jf2rx\" (UID: \"1c152806-f023-4db2-9981-8af52df2f7ef\") " pod="kube-system/kube-proxy-jf2rx" Jan 28 06:56:21.629165 systemd[1]: Created slice kubepods-besteffort-pode46a352e_99d1_483b_8b01_b49b5149e8f1.slice - libcontainer container kubepods-besteffort-pode46a352e_99d1_483b_8b01_b49b5149e8f1.slice. Jan 28 06:56:21.662667 containerd[1638]: time="2026-01-28T06:56:21.662323781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jf2rx,Uid:1c152806-f023-4db2-9981-8af52df2f7ef,Namespace:kube-system,Attempt:0,}" Jan 28 06:56:21.691976 containerd[1638]: time="2026-01-28T06:56:21.691125011Z" level=info msg="connecting to shim 99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6" address="unix:///run/containerd/s/2a5a426d0660a2e3341f1f4ff1bab6c9f7bdb1966dbd72ebecbcc3f444d29248" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:21.696171 kubelet[2936]: I0128 06:56:21.696099 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qtx\" (UniqueName: \"kubernetes.io/projected/e46a352e-99d1-483b-8b01-b49b5149e8f1-kube-api-access-v9qtx\") pod \"tigera-operator-7dcd859c48-z4kts\" (UID: \"e46a352e-99d1-483b-8b01-b49b5149e8f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-z4kts" Jan 28 06:56:21.696673 kubelet[2936]: I0128 06:56:21.696201 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e46a352e-99d1-483b-8b01-b49b5149e8f1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-z4kts\" (UID: \"e46a352e-99d1-483b-8b01-b49b5149e8f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-z4kts" Jan 28 06:56:21.743411 systemd[1]: Started cri-containerd-99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6.scope - libcontainer container 99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6. 
Jan 28 06:56:21.764000 audit: BPF prog-id=137 op=LOAD Jan 28 06:56:21.768871 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 06:56:21.769077 kernel: audit: type=1334 audit(1769583381.764:443): prog-id=137 op=LOAD Jan 28 06:56:21.770000 audit: BPF prog-id=138 op=LOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.774226 kernel: audit: type=1334 audit(1769583381.770:444): prog-id=138 op=LOAD Jan 28 06:56:21.774318 kernel: audit: type=1300 audit(1769583381.770:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.779583 kernel: audit: type=1327 audit(1769583381.770:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.783627 kernel: audit: type=1334 audit(1769583381.770:445): prog-id=138 op=UNLOAD Jan 28 06:56:21.784411 kernel: audit: type=1300 audit(1769583381.770:445): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: BPF prog-id=138 op=UNLOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.791593 kernel: audit: type=1327 audit(1769583381.770:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.770000 audit: BPF prog-id=139 op=LOAD Jan 28 06:56:21.797208 kernel: audit: type=1334 audit(1769583381.770:446): prog-id=139 op=LOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.803972 kernel: audit: type=1300 audit(1769583381.770:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.814075 kernel: audit: type=1327 audit(1769583381.770:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.770000 audit: BPF prog-id=140 op=LOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.770000 audit: BPF prog-id=140 op=UNLOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.770000 audit: BPF prog-id=139 op=UNLOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.770000 audit: BPF prog-id=141 op=LOAD Jan 28 06:56:21.770000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2995 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.770000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643436326465643934396438666133653837366461373462303235 Jan 28 06:56:21.856359 containerd[1638]: time="2026-01-28T06:56:21.856139473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jf2rx,Uid:1c152806-f023-4db2-9981-8af52df2f7ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6\"" Jan 28 06:56:21.862854 containerd[1638]: time="2026-01-28T06:56:21.862800850Z" level=info msg="CreateContainer within sandbox \"99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 06:56:21.876412 containerd[1638]: time="2026-01-28T06:56:21.876358904Z" level=info msg="Container 728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:21.885702 containerd[1638]: time="2026-01-28T06:56:21.885534792Z" level=info msg="CreateContainer within sandbox \"99d462ded949d8fa3e876da74b025542eef6da2b55f4f7c2d25cafe9bf6349f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5\"" Jan 28 06:56:21.889235 containerd[1638]: time="2026-01-28T06:56:21.889177434Z" level=info msg="StartContainer for \"728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5\"" Jan 28 06:56:21.891484 containerd[1638]: time="2026-01-28T06:56:21.891448745Z" level=info msg="connecting to shim 728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5" address="unix:///run/containerd/s/2a5a426d0660a2e3341f1f4ff1bab6c9f7bdb1966dbd72ebecbcc3f444d29248" protocol=ttrpc version=3 Jan 28 06:56:21.926307 systemd[1]: Started cri-containerd-728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5.scope - libcontainer container 728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5. 
Jan 28 06:56:21.935316 containerd[1638]: time="2026-01-28T06:56:21.935261602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z4kts,Uid:e46a352e-99d1-483b-8b01-b49b5149e8f1,Namespace:tigera-operator,Attempt:0,}" Jan 28 06:56:21.962805 containerd[1638]: time="2026-01-28T06:56:21.962719222Z" level=info msg="connecting to shim b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db" address="unix:///run/containerd/s/a3c7ca6132d2d0a221a392c0258543e1394327c1d3fcabbbca115fb583869350" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:21.999000 audit: BPF prog-id=142 op=LOAD Jan 28 06:56:21.999000 audit[3035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2995 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:21.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732383436326430633533386565313838366131643939363366326235 Jan 28 06:56:22.002000 audit: BPF prog-id=143 op=LOAD Jan 28 06:56:22.002000 audit[3035]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2995 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732383436326430633533386565313838366131643939363366326235 Jan 28 06:56:22.002000 audit: BPF prog-id=143 op=UNLOAD Jan 28 06:56:22.002000 audit[3035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732383436326430633533386565313838366131643939363366326235 Jan 28 06:56:22.002000 audit: BPF prog-id=142 op=UNLOAD Jan 28 06:56:22.002000 audit[3035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2995 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732383436326430633533386565313838366131643939363366326235 Jan 28 06:56:22.002000 audit: BPF prog-id=144 op=LOAD Jan 28 06:56:22.002000 audit[3035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2995 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732383436326430633533386565313838366131643939363366326235 Jan 28 06:56:22.011800 systemd[1]: Started cri-containerd-b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db.scope - libcontainer container b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db. Jan 28 06:56:22.045090 containerd[1638]: time="2026-01-28T06:56:22.044889113Z" level=info msg="StartContainer for \"728462d0c538ee1886a1d9963f2b5beda6a8126c97ce2fb1df836cc598767ef5\" returns successfully" Jan 28 06:56:22.049000 audit: BPF prog-id=145 op=LOAD Jan 28 06:56:22.051000 audit: BPF prog-id=146 op=LOAD Jan 28 06:56:22.051000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.051000 audit: BPF prog-id=146 op=UNLOAD Jan 28 06:56:22.051000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.052000 audit: BPF prog-id=147 op=LOAD Jan 28 06:56:22.052000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.052000 audit: BPF prog-id=148 op=LOAD Jan 28 06:56:22.052000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.052000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.052000 audit: BPF prog-id=148 op=UNLOAD Jan 28 06:56:22.052000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.052000 audit: BPF prog-id=147 op=UNLOAD Jan 28 06:56:22.052000 audit[3074]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.052000 audit: BPF prog-id=149 op=LOAD Jan 28 06:56:22.052000 audit[3074]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3063 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346566663938633034363237663932626164663236313839393564 Jan 28 06:56:22.082732 kubelet[2936]: I0128 06:56:22.082297 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jf2rx" podStartSLOduration=1.082275533 podStartE2EDuration="1.082275533s" podCreationTimestamp="2026-01-28 06:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:56:22.080935394 +0000 UTC m=+7.317592947" watchObservedRunningTime="2026-01-28 06:56:22.082275533 +0000 UTC m=+7.318933050" Jan 28 06:56:22.148409 containerd[1638]: time="2026-01-28T06:56:22.148141323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z4kts,Uid:e46a352e-99d1-483b-8b01-b49b5149e8f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db\"" Jan 28 06:56:22.154239 containerd[1638]: time="2026-01-28T06:56:22.154157000Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 06:56:22.516529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675735652.mount: Deactivated successfully. 
Jan 28 06:56:22.581000 audit[3143]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.581000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6d304ac0 a2=0 a3=7ffe6d304aac items=0 ppid=3048 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 06:56:22.584000 audit[3145]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.584000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd40e86240 a2=0 a3=7ffd40e8622c items=0 ppid=3048 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 06:56:22.585000 audit[3144]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.585000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe12f7e650 a2=0 a3=7ffe12f7e63c items=0 ppid=3048 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 06:56:22.587000 audit[3147]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.587000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd51088290 a2=0 a3=7ffd5108827c items=0 ppid=3048 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.588000 audit[3148]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.588000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce454dac0 a2=0 a3=7ffce454daac items=0 ppid=3048 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 06:56:22.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 06:56:22.591000 audit[3149]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 06:56:22.591000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3c3b7db0 a2=0 a3=7ffe3c3b7d9c items=0 ppid=3048 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 06:56:22.703000 audit[3152]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.703000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcc6b768d0 a2=0 a3=7ffcc6b768bc items=0 ppid=3048 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 06:56:22.709000 audit[3154]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.709000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd3f7127d0 a2=0 a3=7ffd3f7127bc items=0 ppid=3048 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 06:56:22.716000 audit[3157]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.716000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc8b076d0 a2=0 a3=7fffc8b076bc items=0 ppid=3048 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 06:56:22.719000 audit[3158]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.719000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec7af15b0 a2=0 a3=7ffec7af159c items=0 ppid=3048 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 
06:56:22.725000 audit[3160]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.725000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd146b8450 a2=0 a3=7ffd146b843c items=0 ppid=3048 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 06:56:22.727000 audit[3161]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.727000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc426ee970 a2=0 a3=7ffc426ee95c items=0 ppid=3048 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 06:56:22.732000 audit[3163]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.732000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd5829b30 a2=0 a3=7fffd5829b1c items=0 ppid=3048 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 06:56:22.738000 audit[3166]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.738000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff8da1af80 a2=0 a3=7fff8da1af6c items=0 ppid=3048 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 06:56:22.740000 audit[3167]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.740000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd4e869b0 a2=0 a3=7ffdd4e8699c items=0 ppid=3048 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 06:56:22.744000 audit[3169]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.744000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc2f4f190 a2=0 a3=7ffcc2f4f17c items=0 ppid=3048 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 06:56:22.747000 audit[3170]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.747000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc522f1d30 a2=0 a3=7ffc522f1d1c items=0 ppid=3048 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 06:56:22.753000 audit[3172]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.753000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4bcd7bf0 a2=0 a3=7ffc4bcd7bdc items=0 ppid=3048 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:56:22.760000 audit[3175]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.760000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce983ee00 a2=0 a3=7ffce983edec items=0 ppid=3048 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:56:22.767000 audit[3178]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.767000 
audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed94f8590 a2=0 a3=7ffed94f857c items=0 ppid=3048 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 06:56:22.770000 audit[3179]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.770000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe10c408f0 a2=0 a3=7ffe10c408dc items=0 ppid=3048 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 06:56:22.781000 audit[3181]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.781000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffddd2206a0 a2=0 a3=7ffddd22068c items=0 ppid=3048 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:56:22.787000 audit[3184]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.787000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe22576070 a2=0 a3=7ffe2257605c items=0 ppid=3048 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:56:22.789000 audit[3185]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.789000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4d10a3a0 a2=0 a3=7ffe4d10a38c items=0 ppid=3048 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 06:56:22.793000 
audit[3187]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:56:22.793000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff137d71c0 a2=0 a3=7fff137d71ac items=0 ppid=3048 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 06:56:22.829000 audit[3193]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:22.829000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd4e07c860 a2=0 a3=7ffd4e07c84c items=0 ppid=3048 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:22.840000 audit[3193]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:22.840000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd4e07c860 a2=0 a3=7ffd4e07c84c items=0 ppid=3048 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:22.842000 audit[3198]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.842000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffededdc760 a2=0 a3=7ffededdc74c items=0 ppid=3048 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 06:56:22.847000 audit[3200]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.847000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb8037120 a2=0 a3=7ffcb803710c items=0 ppid=3048 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.847000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 06:56:22.856000 audit[3203]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.856000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed04c88c0 a2=0 a3=7ffed04c88ac items=0 ppid=3048 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 06:56:22.858000 audit[3204]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.858000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe60210a90 a2=0 a3=7ffe60210a7c items=0 ppid=3048 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 06:56:22.862000 audit[3206]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.862000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe87a923b0 a2=0 a3=7ffe87a9239c items=0 ppid=3048 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 06:56:22.865000 audit[3207]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.865000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7feb56b0 a2=0 a3=7ffe7feb569c items=0 ppid=3048 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 06:56:22.869000 audit[3209]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.869000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4447bd60 a2=0 
a3=7ffd4447bd4c items=0 ppid=3048 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 06:56:22.875000 audit[3212]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.875000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff96ab9ef0 a2=0 a3=7fff96ab9edc items=0 ppid=3048 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 06:56:22.878000 audit[3213]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.878000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8c5b4700 a2=0 a3=7ffc8c5b46ec items=0 ppid=3048 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 06:56:22.882000 audit[3215]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.882000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe21039430 a2=0 a3=7ffe2103941c items=0 ppid=3048 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 06:56:22.884000 audit[3216]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.884000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc34f44ec0 a2=0 a3=7ffc34f44eac items=0 ppid=3048 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 06:56:22.889000 
audit[3218]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.889000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea51d2210 a2=0 a3=7ffea51d21fc items=0 ppid=3048 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:56:22.895000 audit[3221]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.895000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecc70a0a0 a2=0 a3=7ffecc70a08c items=0 ppid=3048 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 06:56:22.902000 audit[3224]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.902000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0a6bbfd0 a2=0 a3=7fff0a6bbfbc items=0 ppid=3048 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 06:56:22.905000 audit[3225]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.905000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd6411ab0 a2=0 a3=7ffcd6411a9c items=0 ppid=3048 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 06:56:22.909000 audit[3227]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.909000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffee4bc8010 a2=0 a3=7ffee4bc7ffc items=0 ppid=3048 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:56:22.915000 audit[3230]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.915000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd20f624c0 a2=0 a3=7ffd20f624ac items=0 ppid=3048 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:56:22.917000 audit[3231]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.917000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaece4b60 a2=0 a3=7ffeaece4b4c items=0 ppid=3048 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 06:56:22.921000 audit[3233]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.921000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd1b530df0 a2=0 a3=7ffd1b530ddc items=0 ppid=3048 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 06:56:22.923000 audit[3234]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.923000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd98f18f60 a2=0 a3=7ffd98f18f4c items=0 ppid=3048 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 06:56:22.930000 audit[3236]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.930000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffde8e33670 a2=0 a3=7ffde8e3365c items=0 ppid=3048 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:56:22.937000 audit[3239]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:56:22.937000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf6471f90 a2=0 a3=7ffcf6471f7c items=0 ppid=3048 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:56:22.943000 audit[3241]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 06:56:22.943000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffaa2e3200 a2=0 a3=7fffaa2e31ec items=0 ppid=3048 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.943000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:22.944000 audit[3241]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 06:56:22.944000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffaa2e3200 a2=0 a3=7fffaa2e31ec items=0 ppid=3048 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:22.944000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:24.131129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615060636.mount: Deactivated successfully. 
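The NETFILTER_CFG burst between 06:56:22.58 and 06:56:22.94 is the iptables/ip6tables children of pid 3048 (kube-proxy, judging by the KUBE-* chains being registered) programming its base chains, first rule by rule and then via batched iptables-restore/ip6tables-restore; family=2 is AF_INET (IPv4) and family=10 is AF_INET6 (IPv6). Decoding the PROCTITLE fields as in the sketch earlier gives commands such as "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle" and "ip6tables-restore -w 5 -W 100000 --noflush --counters". A rough sketch that sums the entries= counts of these records per table and family, assuming the journal text is piped in on stdin (the script name and the journalctl pipe are illustrative, not taken from this host):

# Rough sketch: sum entries= of NETFILTER_CFG audit records by (family, table),
# e.g. `journalctl -k | python3 tally_netfilter.py` (illustrative usage).
import re, sys
from collections import Counter

FAMILIES = {"2": "AF_INET", "10": "AF_INET6"}  # only values seen in these records
pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

counts = Counter()
for line in sys.stdin:
    for table, family, entries in pattern.findall(line):
        counts[(FAMILIES.get(family, family), table)] += int(entries)

for (family, table), total in sorted(counts.items()):
    print(f"{family:8s} {table:7s} {total:4d} entries")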
Jan 28 06:56:25.508807 containerd[1638]: time="2026-01-28T06:56:25.508483851Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:25.509973 containerd[1638]: time="2026-01-28T06:56:25.509820685Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 28 06:56:25.511020 containerd[1638]: time="2026-01-28T06:56:25.510982077Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:25.513895 containerd[1638]: time="2026-01-28T06:56:25.513855644Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:25.515033 containerd[1638]: time="2026-01-28T06:56:25.514994846Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.360747877s" Jan 28 06:56:25.515181 containerd[1638]: time="2026-01-28T06:56:25.515152503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 06:56:25.521341 containerd[1638]: time="2026-01-28T06:56:25.521190701Z" level=info msg="CreateContainer within sandbox \"b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 06:56:25.531563 containerd[1638]: time="2026-01-28T06:56:25.531529006Z" level=info msg="Container 3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:25.558645 containerd[1638]: time="2026-01-28T06:56:25.558549167Z" level=info msg="CreateContainer within sandbox \"b14eff98c04627f92badf2618995d393e83ddb14dbf9d539bf7a67dfbe68d8db\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182\"" Jan 28 06:56:25.560843 containerd[1638]: time="2026-01-28T06:56:25.560372274Z" level=info msg="StartContainer for \"3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182\"" Jan 28 06:56:25.561936 containerd[1638]: time="2026-01-28T06:56:25.561902229Z" level=info msg="connecting to shim 3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182" address="unix:///run/containerd/s/a3c7ca6132d2d0a221a392c0258543e1394327c1d3fcabbbca115fb583869350" protocol=ttrpc version=3 Jan 28 06:56:25.596182 systemd[1]: Started cri-containerd-3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182.scope - libcontainer container 3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182. 
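The pull record above reports roughly 25 MB read in about 3.36 s for quay.io/tigera/operator:v1.38.7; a quick back-of-the-envelope check of the transfer rate (the "bytes read" figure is whatever containerd counted for the pull, so treat the result as approximate):

# Approximate pull throughput from the two figures logged above.
bytes_read = 25052948          # "bytes read=25052948"
duration_s = 3.360747877       # "in 3.360747877s"
print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")   # ~7.45 MB/s (~7.1 MiB/s)

The same 3.36 s pull window reappears in kubelet's startup-latency record for the tigera-operator pod a few seconds later: podStartE2EDuration (7.621322645s) minus the firstStartedPulling-to-lastFinishedPulling span (~3.364462030s) matches the reported podStartSLOduration (4.256860622) to within nanoseconds, consistent with the SLO figure excluding image-pull time.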
Jan 28 06:56:25.618000 audit: BPF prog-id=150 op=LOAD Jan 28 06:56:25.619000 audit: BPF prog-id=151 op=LOAD Jan 28 06:56:25.619000 audit[3250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.619000 audit: BPF prog-id=151 op=UNLOAD Jan 28 06:56:25.619000 audit[3250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.620000 audit: BPF prog-id=152 op=LOAD Jan 28 06:56:25.620000 audit[3250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.620000 audit: BPF prog-id=153 op=LOAD Jan 28 06:56:25.620000 audit[3250]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.620000 audit: BPF prog-id=153 op=UNLOAD Jan 28 06:56:25.620000 audit[3250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.620000 audit: BPF prog-id=152 op=UNLOAD Jan 28 06:56:25.620000 audit[3250]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.620000 audit: BPF prog-id=154 op=LOAD Jan 28 06:56:25.620000 audit[3250]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3063 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:25.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363333339636564326165316335653339396135333930346633373133 Jan 28 06:56:25.690413 containerd[1638]: time="2026-01-28T06:56:25.690346827Z" level=info msg="StartContainer for \"3c339ced2ae1c5e399a53904f37138a31605f62318198ead32b2ad96baad1182\" returns successfully" Jan 28 06:56:28.621833 kubelet[2936]: I0128 06:56:28.621344 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-z4kts" podStartSLOduration=4.256860622 podStartE2EDuration="7.621322645s" podCreationTimestamp="2026-01-28 06:56:21 +0000 UTC" firstStartedPulling="2026-01-28 06:56:22.151771659 +0000 UTC m=+7.388429173" lastFinishedPulling="2026-01-28 06:56:25.516233689 +0000 UTC m=+10.752891196" observedRunningTime="2026-01-28 06:56:26.096889428 +0000 UTC m=+11.333546967" watchObservedRunningTime="2026-01-28 06:56:28.621322645 +0000 UTC m=+13.857980165" Jan 28 06:56:33.370000 audit[1933]: USER_END pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:56:33.380399 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 28 06:56:33.380500 kernel: audit: type=1106 audit(1769583393.370:523): pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:56:33.371309 sudo[1933]: pam_unix(sudo:session): session closed for user root Jan 28 06:56:33.381000 audit[1933]: CRED_DISP pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:56:33.393969 kernel: audit: type=1104 audit(1769583393.381:524): pid=1933 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 06:56:33.474563 sshd[1932]: Connection closed by 20.161.92.111 port 40080 Jan 28 06:56:33.475857 sshd-session[1928]: pam_unix(sshd:session): session closed for user core Jan 28 06:56:33.478000 audit[1928]: USER_END pid=1928 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:56:33.488308 kernel: audit: type=1106 audit(1769583393.478:525): pid=1928 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:56:33.492801 systemd[1]: sshd@6-10.230.31.94:22-20.161.92.111:40080.service: Deactivated successfully. Jan 28 06:56:33.478000 audit[1928]: CRED_DISP pid=1928 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:56:33.504000 kernel: audit: type=1104 audit(1769583393.478:526): pid=1928 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:56:33.504172 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 06:56:33.506016 systemd[1]: session-10.scope: Consumed 8.264s CPU time, 153.6M memory peak. Jan 28 06:56:33.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.31.94:22-20.161.92.111:40080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:33.512055 systemd-logind[1613]: Session 10 logged out. Waiting for processes to exit. Jan 28 06:56:33.515261 kernel: audit: type=1131 audit(1769583393.492:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.31.94:22-20.161.92.111:40080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:56:33.517404 systemd-logind[1613]: Removed session 10. 
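The numeric audit fields in this stretch can be read back into names: arch=c000003e is AUDIT_ARCH_X86_64, the syscall numbers follow the x86_64 table (3=close, 46=sendmsg, 321=bpf), the type= values in the kernel echoes follow linux/audit.h, and the audit(EPOCH:SERIAL) stamp is a Unix timestamp. A small sketch limited to the values that actually appear here:

# Rough sketch: make the raw numeric audit fields in this stretch readable.
# Mappings cover only values seen above; names follow linux/audit.h and the
# x86_64 syscall table.
from datetime import datetime, timezone

ARCH = {0xC000003E: "AUDIT_ARCH_X86_64"}
SYSCALL_X86_64 = {3: "close", 46: "sendmsg", 321: "bpf"}
RECORD_TYPE = {1104: "CRED_DISP", 1106: "USER_END", 1131: "SERVICE_STOP",
               1300: "SYSCALL", 1325: "NETFILTER_CFG", 1327: "PROCTITLE"}

# audit(1769583393.370:523) from the sudo session-close record above:
print(datetime.fromtimestamp(1769583393.370, tz=timezone.utc))
# 2026-01-28 06:56:33.370000+00:00, matching the journal timestamp on that line

print(ARCH[0xC000003E], SYSCALL_X86_64[321], RECORD_TYPE[1325])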
Jan 28 06:56:34.202000 audit[3330]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.221138 kernel: audit: type=1325 audit(1769583394.202:528): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.221391 kernel: audit: type=1300 audit(1769583394.202:528): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc0e9a9540 a2=0 a3=7ffc0e9a952c items=0 ppid=3048 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.202000 audit[3330]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc0e9a9540 a2=0 a3=7ffc0e9a952c items=0 ppid=3048 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:34.226986 kernel: audit: type=1327 audit(1769583394.202:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:34.231000 audit[3330]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.236282 kernel: audit: type=1325 audit(1769583394.231:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.236396 kernel: audit: type=1300 audit(1769583394.231:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0e9a9540 a2=0 a3=0 items=0 ppid=3048 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.231000 audit[3330]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0e9a9540 a2=0 a3=0 items=0 ppid=3048 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:34.266000 audit[3332]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.266000 audit[3332]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff14682520 a2=0 a3=7fff1468250c items=0 ppid=3048 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:34.272000 audit[3332]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3332 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:34.272000 audit[3332]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff14682520 a2=0 a3=0 items=0 ppid=3048 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:34.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:37.849000 audit[3334]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:37.849000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd3f0fe70 a2=0 a3=7fffd3f0fe5c items=0 ppid=3048 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:37.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:37.854000 audit[3334]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:37.854000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd3f0fe70 a2=0 a3=0 items=0 ppid=3048 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:37.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:37.890000 audit[3336]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:37.890000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff844f76a0 a2=0 a3=7fff844f768c items=0 ppid=3048 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:37.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:37.896000 audit[3336]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:37.896000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff844f76a0 a2=0 a3=0 items=0 ppid=3048 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:37.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:38.915000 audit[3338]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:38.924640 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 
28 06:56:38.924775 kernel: audit: type=1325 audit(1769583398.915:536): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:38.915000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc67752df0 a2=0 a3=7ffc67752ddc items=0 ppid=3048 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:38.941997 kernel: audit: type=1300 audit(1769583398.915:536): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc67752df0 a2=0 a3=7ffc67752ddc items=0 ppid=3048 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:38.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:38.935000 audit[3338]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:38.955562 kernel: audit: type=1327 audit(1769583398.915:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:38.955659 kernel: audit: type=1325 audit(1769583398.935:537): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:38.935000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc67752df0 a2=0 a3=0 items=0 ppid=3048 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:38.959361 kernel: audit: type=1300 audit(1769583398.935:537): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc67752df0 a2=0 a3=0 items=0 ppid=3048 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:38.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:38.970994 kernel: audit: type=1327 audit(1769583398.935:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:40.281000 audit[3340]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:40.290972 kernel: audit: type=1325 audit(1769583400.281:538): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:40.281000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe2930e590 a2=0 a3=7ffe2930e57c items=0 ppid=3048 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:40.298330 kernel: audit: type=1300 audit(1769583400.281:538): 
arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe2930e590 a2=0 a3=7ffe2930e57c items=0 ppid=3048 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:40.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:40.304108 kernel: audit: type=1327 audit(1769583400.281:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:40.307000 audit[3340]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:40.312973 kernel: audit: type=1325 audit(1769583400.307:539): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:40.307000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2930e590 a2=0 a3=0 items=0 ppid=3048 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:40.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:40.361650 systemd[1]: Created slice kubepods-besteffort-pod30476136_e602_47ff_98c9_eddc41c51793.slice - libcontainer container kubepods-besteffort-pod30476136_e602_47ff_98c9_eddc41c51793.slice. Jan 28 06:56:40.519335 kubelet[2936]: I0128 06:56:40.519267 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/30476136-e602-47ff-98c9-eddc41c51793-typha-certs\") pod \"calico-typha-9d4c7dbf8-9k6s6\" (UID: \"30476136-e602-47ff-98c9-eddc41c51793\") " pod="calico-system/calico-typha-9d4c7dbf8-9k6s6" Jan 28 06:56:40.520556 systemd[1]: Created slice kubepods-besteffort-pode1701199_ccc5_453b_9ddf_23fa1b692db6.slice - libcontainer container kubepods-besteffort-pode1701199_ccc5_453b_9ddf_23fa1b692db6.slice. 
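The proctitle= field in these audit records is the invoking command line, hex-encoded with NUL bytes separating the arguments. Decoding the value that repeats above shows that the parent process (ppid 3048) is running `iptables-restore -w 5 -W 100000 --noflush --counters` through /usr/sbin/xtables-nft-multi, which is why each run produces one NETFILTER_CFG record per table it touches. A minimal decoder (the hex literal is copied from the log):

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```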
Jan 28 06:56:40.523698 kubelet[2936]: I0128 06:56:40.523486 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjsl\" (UniqueName: \"kubernetes.io/projected/30476136-e602-47ff-98c9-eddc41c51793-kube-api-access-lxjsl\") pod \"calico-typha-9d4c7dbf8-9k6s6\" (UID: \"30476136-e602-47ff-98c9-eddc41c51793\") " pod="calico-system/calico-typha-9d4c7dbf8-9k6s6" Jan 28 06:56:40.523698 kubelet[2936]: I0128 06:56:40.523575 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30476136-e602-47ff-98c9-eddc41c51793-tigera-ca-bundle\") pod \"calico-typha-9d4c7dbf8-9k6s6\" (UID: \"30476136-e602-47ff-98c9-eddc41c51793\") " pod="calico-system/calico-typha-9d4c7dbf8-9k6s6" Jan 28 06:56:40.625300 kubelet[2936]: I0128 06:56:40.625028 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-cni-net-dir\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.626562 kubelet[2936]: I0128 06:56:40.626066 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1701199-ccc5-453b-9ddf-23fa1b692db6-tigera-ca-bundle\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.626562 kubelet[2936]: I0128 06:56:40.626313 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-xtables-lock\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.627128 kubelet[2936]: I0128 06:56:40.626353 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-lib-modules\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.627128 kubelet[2936]: I0128 06:56:40.626891 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xcr\" (UniqueName: \"kubernetes.io/projected/e1701199-ccc5-453b-9ddf-23fa1b692db6-kube-api-access-r9xcr\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.629963 kubelet[2936]: I0128 06:56:40.627537 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-var-run-calico\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.629963 kubelet[2936]: I0128 06:56:40.629629 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-flexvol-driver-host\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " 
pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.629963 kubelet[2936]: I0128 06:56:40.629737 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-cni-log-dir\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.629963 kubelet[2936]: I0128 06:56:40.629765 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-policysync\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.629963 kubelet[2936]: I0128 06:56:40.629810 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-var-lib-calico\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.630248 kubelet[2936]: I0128 06:56:40.629839 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e1701199-ccc5-453b-9ddf-23fa1b692db6-node-certs\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.630248 kubelet[2936]: I0128 06:56:40.629868 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e1701199-ccc5-453b-9ddf-23fa1b692db6-cni-bin-dir\") pod \"calico-node-9jfsj\" (UID: \"e1701199-ccc5-453b-9ddf-23fa1b692db6\") " pod="calico-system/calico-node-9jfsj" Jan 28 06:56:40.653622 kubelet[2936]: E0128 06:56:40.653271 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:40.705392 containerd[1638]: time="2026-01-28T06:56:40.705031271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d4c7dbf8-9k6s6,Uid:30476136-e602-47ff-98c9-eddc41c51793,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:40.757986 kubelet[2936]: E0128 06:56:40.757677 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.757986 kubelet[2936]: W0128 06:56:40.757728 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.757986 kubelet[2936]: E0128 06:56:40.757785 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.761331 kubelet[2936]: E0128 06:56:40.761274 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.761535 kubelet[2936]: W0128 06:56:40.761296 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.761535 kubelet[2936]: E0128 06:56:40.761454 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.808585 containerd[1638]: time="2026-01-28T06:56:40.807871848Z" level=info msg="connecting to shim c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47" address="unix:///run/containerd/s/370450c86224151ce0ea6a89ce89fa87040da4c6fb8a08c2191262797b459c6a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:40.812882 kubelet[2936]: E0128 06:56:40.812368 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.812882 kubelet[2936]: W0128 06:56:40.812400 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.812882 kubelet[2936]: E0128 06:56:40.812443 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.835207 kubelet[2936]: E0128 06:56:40.835160 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.835207 kubelet[2936]: W0128 06:56:40.835191 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.835207 kubelet[2936]: E0128 06:56:40.835222 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.837232 kubelet[2936]: I0128 06:56:40.836196 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feae4718-ebbe-416f-b2aa-04c3e4a5379c-kubelet-dir\") pod \"csi-node-driver-4hjw4\" (UID: \"feae4718-ebbe-416f-b2aa-04c3e4a5379c\") " pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:40.837232 kubelet[2936]: E0128 06:56:40.837047 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.837232 kubelet[2936]: W0128 06:56:40.837068 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.837232 kubelet[2936]: E0128 06:56:40.837086 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.837645 containerd[1638]: time="2026-01-28T06:56:40.835266136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jfsj,Uid:e1701199-ccc5-453b-9ddf-23fa1b692db6,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:40.838816 kubelet[2936]: E0128 06:56:40.838791 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.838816 kubelet[2936]: W0128 06:56:40.838814 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.838988 kubelet[2936]: E0128 06:56:40.838832 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.839828 kubelet[2936]: E0128 06:56:40.839802 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.839921 kubelet[2936]: W0128 06:56:40.839828 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.839921 kubelet[2936]: E0128 06:56:40.839863 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.840358 kubelet[2936]: I0128 06:56:40.840328 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbk7\" (UniqueName: \"kubernetes.io/projected/feae4718-ebbe-416f-b2aa-04c3e4a5379c-kube-api-access-9wbk7\") pod \"csi-node-driver-4hjw4\" (UID: \"feae4718-ebbe-416f-b2aa-04c3e4a5379c\") " pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:40.842044 kubelet[2936]: E0128 06:56:40.842006 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.842044 kubelet[2936]: W0128 06:56:40.842029 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.842183 kubelet[2936]: E0128 06:56:40.842046 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.842624 kubelet[2936]: E0128 06:56:40.842599 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.842624 kubelet[2936]: W0128 06:56:40.842622 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.843007 kubelet[2936]: E0128 06:56:40.842639 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.844840 kubelet[2936]: E0128 06:56:40.844815 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.844840 kubelet[2936]: W0128 06:56:40.844837 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.845814 kubelet[2936]: E0128 06:56:40.844854 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.845814 kubelet[2936]: I0128 06:56:40.844897 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/feae4718-ebbe-416f-b2aa-04c3e4a5379c-varrun\") pod \"csi-node-driver-4hjw4\" (UID: \"feae4718-ebbe-416f-b2aa-04c3e4a5379c\") " pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:40.846169 kubelet[2936]: E0128 06:56:40.846076 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.846169 kubelet[2936]: W0128 06:56:40.846094 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.846169 kubelet[2936]: E0128 06:56:40.846110 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.846169 kubelet[2936]: I0128 06:56:40.846156 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/feae4718-ebbe-416f-b2aa-04c3e4a5379c-registration-dir\") pod \"csi-node-driver-4hjw4\" (UID: \"feae4718-ebbe-416f-b2aa-04c3e4a5379c\") " pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:40.847319 kubelet[2936]: E0128 06:56:40.847295 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.847319 kubelet[2936]: W0128 06:56:40.847317 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.847426 kubelet[2936]: E0128 06:56:40.847335 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.848416 kubelet[2936]: I0128 06:56:40.848354 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/feae4718-ebbe-416f-b2aa-04c3e4a5379c-socket-dir\") pod \"csi-node-driver-4hjw4\" (UID: \"feae4718-ebbe-416f-b2aa-04c3e4a5379c\") " pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:40.849070 kubelet[2936]: E0128 06:56:40.848499 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.849070 kubelet[2936]: W0128 06:56:40.848513 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.849070 kubelet[2936]: E0128 06:56:40.848743 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.850387 kubelet[2936]: E0128 06:56:40.850359 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.850387 kubelet[2936]: W0128 06:56:40.850383 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.850722 kubelet[2936]: E0128 06:56:40.850400 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.851594 kubelet[2936]: E0128 06:56:40.851568 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.851594 kubelet[2936]: W0128 06:56:40.851590 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.851594 kubelet[2936]: E0128 06:56:40.851606 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.852878 kubelet[2936]: E0128 06:56:40.852777 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.852878 kubelet[2936]: W0128 06:56:40.852800 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.852878 kubelet[2936]: E0128 06:56:40.852820 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.854168 kubelet[2936]: E0128 06:56:40.854113 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.854168 kubelet[2936]: W0128 06:56:40.854136 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.854168 kubelet[2936]: E0128 06:56:40.854153 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.855613 kubelet[2936]: E0128 06:56:40.855546 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.856016 kubelet[2936]: W0128 06:56:40.855986 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.856258 kubelet[2936]: E0128 06:56:40.856018 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.892742 containerd[1638]: time="2026-01-28T06:56:40.890615321Z" level=info msg="connecting to shim 754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c" address="unix:///run/containerd/s/35f3d247e14a96f14cc649f5841db95363f225ad5a023c0cc04d1f23ec983ec9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:56:40.950769 kubelet[2936]: E0128 06:56:40.950694 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.950769 kubelet[2936]: W0128 06:56:40.950731 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.950769 kubelet[2936]: E0128 06:56:40.950764 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.952493 kubelet[2936]: E0128 06:56:40.952455 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.952594 kubelet[2936]: W0128 06:56:40.952502 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.952594 kubelet[2936]: E0128 06:56:40.952523 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.954355 kubelet[2936]: E0128 06:56:40.954331 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.956323 kubelet[2936]: W0128 06:56:40.955403 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.956323 kubelet[2936]: E0128 06:56:40.955460 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.956323 kubelet[2936]: E0128 06:56:40.956045 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.956323 kubelet[2936]: W0128 06:56:40.956075 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.956323 kubelet[2936]: E0128 06:56:40.956092 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.957206 kubelet[2936]: E0128 06:56:40.957177 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.957206 kubelet[2936]: W0128 06:56:40.957198 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.957327 kubelet[2936]: E0128 06:56:40.957215 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.958266 kubelet[2936]: E0128 06:56:40.958168 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.958266 kubelet[2936]: W0128 06:56:40.958193 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.958266 kubelet[2936]: E0128 06:56:40.958210 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.959513 kubelet[2936]: E0128 06:56:40.959484 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.959513 kubelet[2936]: W0128 06:56:40.959505 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.959861 kubelet[2936]: E0128 06:56:40.959521 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.961918 kubelet[2936]: E0128 06:56:40.961297 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.961918 kubelet[2936]: W0128 06:56:40.961318 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.961918 kubelet[2936]: E0128 06:56:40.961335 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.963149 kubelet[2936]: E0128 06:56:40.963119 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.963149 kubelet[2936]: W0128 06:56:40.963141 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.963992 kubelet[2936]: E0128 06:56:40.963158 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.964836 kubelet[2936]: E0128 06:56:40.964797 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.964836 kubelet[2936]: W0128 06:56:40.964820 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.965039 kubelet[2936]: E0128 06:56:40.964840 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.966886 kubelet[2936]: E0128 06:56:40.966398 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.966886 kubelet[2936]: W0128 06:56:40.966420 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.966886 kubelet[2936]: E0128 06:56:40.966450 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.969237 kubelet[2936]: E0128 06:56:40.968121 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.969237 kubelet[2936]: W0128 06:56:40.968144 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.969237 kubelet[2936]: E0128 06:56:40.968212 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.969381 kubelet[2936]: E0128 06:56:40.969258 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.969381 kubelet[2936]: W0128 06:56:40.969273 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.969381 kubelet[2936]: E0128 06:56:40.969289 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.970936 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.972805 kubelet[2936]: W0128 06:56:40.971089 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.971107 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.971588 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.972805 kubelet[2936]: W0128 06:56:40.971601 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.971617 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.972478 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.972805 kubelet[2936]: W0128 06:56:40.972492 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.972805 kubelet[2936]: E0128 06:56:40.972622 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.975305 systemd[1]: Started cri-containerd-754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c.scope - libcontainer container 754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c. 
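The recurring driver-call failures above come from kubelet's FlexVolume prober: each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is probed by executing its driver binary with the single argument `init` and parsing a JSON status object from stdout. The nodeagent~uds binary is not installed yet at this point (Calico normally installs it via the calico-node pod's flexvol-driver container, whose sandbox is being set up here), so the exec fails, stdout is empty, and the unmarshal reports "unexpected end of JSON input". For reference, a hypothetical driver stub that would satisfy the probe looks roughly like this (an illustrative sketch of the FlexVolume init contract, not Calico's actual uds driver):

```python
#!/usr/bin/env python3
# Illustrative FlexVolume driver stub: kubelet runs `<driver> init` and
# expects a JSON status object on stdout. Empty output is what triggers
# the "unexpected end of JSON input" errors in the log above.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Report success; declare that no separate attach/detach phase is needed.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Other driver calls (mount, unmount, ...) are out of scope for this sketch.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```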
Jan 28 06:56:40.982600 kubelet[2936]: E0128 06:56:40.981667 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.984457 kubelet[2936]: W0128 06:56:40.984139 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.984457 kubelet[2936]: E0128 06:56:40.984174 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.984878 kubelet[2936]: E0128 06:56:40.984668 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.985179 kubelet[2936]: W0128 06:56:40.985157 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.986057 kubelet[2936]: E0128 06:56:40.985489 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.987156 kubelet[2936]: E0128 06:56:40.986507 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.987156 kubelet[2936]: W0128 06:56:40.986526 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.987156 kubelet[2936]: E0128 06:56:40.986543 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.988353 kubelet[2936]: E0128 06:56:40.988134 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.988353 kubelet[2936]: W0128 06:56:40.988154 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.988353 kubelet[2936]: E0128 06:56:40.988170 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.989481 kubelet[2936]: E0128 06:56:40.988958 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.989481 kubelet[2936]: W0128 06:56:40.989085 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.989481 kubelet[2936]: E0128 06:56:40.989102 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:56:40.990624 kubelet[2936]: E0128 06:56:40.990357 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.990624 kubelet[2936]: W0128 06:56:40.990376 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.990624 kubelet[2936]: E0128 06:56:40.990399 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.993125 kubelet[2936]: E0128 06:56:40.992701 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.993125 kubelet[2936]: W0128 06:56:40.992721 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.993125 kubelet[2936]: E0128 06:56:40.992738 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.994615 kubelet[2936]: E0128 06:56:40.994125 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.994615 kubelet[2936]: W0128 06:56:40.994325 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.994615 kubelet[2936]: E0128 06:56:40.994344 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:40.997300 kubelet[2936]: E0128 06:56:40.997043 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:40.997300 kubelet[2936]: W0128 06:56:40.997083 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:40.997300 kubelet[2936]: E0128 06:56:40.997101 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:41.014367 systemd[1]: Started cri-containerd-c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47.scope - libcontainer container c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47. 
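The transient systemd units appearing here follow the systemd cgroup driver's naming scheme: each pod gets a kubepods-<qos>-pod<uid>.slice (with the dashes in the pod UID replaced by underscores) and each container a cri-containerd-<container-id>.scope started for its runc container. A small, purely illustrative helper that maps these unit names back to the Kubernetes identifiers seen elsewhere in the log:

```python
import re

def pod_uid_from_slice(unit: str) -> str:
    """Recover a pod UID from a kubepods slice unit name."""
    m = re.search(r"-pod([0-9a-f_]+)\.slice$", unit)
    return m.group(1).replace("_", "-") if m else ""

def container_id_from_scope(unit: str) -> str:
    """Recover a container ID from a cri-containerd scope unit name."""
    m = re.match(r"cri-containerd-([0-9a-f]+)\.scope$", unit)
    return m.group(1) if m else ""

print(pod_uid_from_slice(
    "kubepods-besteffort-pod30476136_e602_47ff_98c9_eddc41c51793.slice"))
# -> 30476136-e602-47ff-98c9-eddc41c51793 (the calico-typha pod UID above)
print(container_id_from_scope(
    "cri-containerd-c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47.scope"))
# -> the calico-typha sandbox container ID
```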
Jan 28 06:56:41.030645 kubelet[2936]: E0128 06:56:41.030367 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:56:41.030901 kubelet[2936]: W0128 06:56:41.030872 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:56:41.031673 kubelet[2936]: E0128 06:56:41.031647 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:56:41.073000 audit: BPF prog-id=155 op=LOAD Jan 28 06:56:41.075000 audit: BPF prog-id=156 op=LOAD Jan 28 06:56:41.075000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.076000 audit: BPF prog-id=156 op=UNLOAD Jan 28 06:56:41.076000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.077000 audit: BPF prog-id=157 op=LOAD Jan 28 06:56:41.077000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.077000 audit: BPF prog-id=158 op=LOAD Jan 28 06:56:41.078000 audit: BPF prog-id=159 op=LOAD Jan 28 06:56:41.078000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.078000 audit: BPF 
prog-id=159 op=UNLOAD Jan 28 06:56:41.078000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.078000 audit: BPF prog-id=157 op=UNLOAD Jan 28 06:56:41.078000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.079000 audit: BPF prog-id=160 op=LOAD Jan 28 06:56:41.079000 audit[3403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.079000 audit: BPF prog-id=160 op=UNLOAD Jan 28 06:56:41.079000 audit[3403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.079000 audit: BPF prog-id=161 op=LOAD Jan 28 06:56:41.079000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3356 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336646536356231643137623863333636353330313964363431613532 Jan 28 06:56:41.080000 audit: BPF prog-id=162 op=LOAD Jan 28 06:56:41.080000 audit[3403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3393 
pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.081000 audit: BPF prog-id=163 op=LOAD Jan 28 06:56:41.081000 audit[3403]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.081000 audit: BPF prog-id=163 op=UNLOAD Jan 28 06:56:41.081000 audit[3403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.081000 audit: BPF prog-id=162 op=UNLOAD Jan 28 06:56:41.081000 audit[3403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.081000 audit: BPF prog-id=164 op=LOAD Jan 28 06:56:41.081000 audit[3403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3393 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343639346264613731323465316434383865646337343138376163 Jan 28 06:56:41.151774 containerd[1638]: time="2026-01-28T06:56:41.151483352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jfsj,Uid:e1701199-ccc5-453b-9ddf-23fa1b692db6,Namespace:calico-system,Attempt:0,} returns sandbox id \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\"" Jan 28 06:56:41.162831 
containerd[1638]: time="2026-01-28T06:56:41.162783203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 06:56:41.197982 containerd[1638]: time="2026-01-28T06:56:41.197833753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d4c7dbf8-9k6s6,Uid:30476136-e602-47ff-98c9-eddc41c51793,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47\"" Jan 28 06:56:41.327000 audit[3483]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:41.327000 audit[3483]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc7074f670 a2=0 a3=7ffc7074f65c items=0 ppid=3048 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:41.331000 audit[3483]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:56:41.331000 audit[3483]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7074f670 a2=0 a3=0 items=0 ppid=3048 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:41.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:56:42.771118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954611881.mount: Deactivated successfully. 
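In the audit SYSCALL records, arch=c000003e identifies AUDIT_ARCH_X86_64, so the numeric syscall= values are x86_64 syscall numbers: 46 is sendmsg (the netlink batch iptables-restore uses to commit each nft_register_rule change), 321 is bpf (matching the BPF prog-id LOAD/UNLOAD lines emitted while runc sets up each container), and 3 is close. A minimal lookup covering just the numbers that appear in this log:

```python
# x86_64 syscall numbers for the audit SYSCALL records above
# (arch=c000003e is AUDIT_ARCH_X86_64); only the values seen in this log.
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

def describe(fields: dict) -> str:
    """Render a tiny summary of an audit SYSCALL record."""
    nr = int(fields["syscall"])
    return f'{fields["comm"]} -> {X86_64_SYSCALLS.get(nr, str(nr))}'

print(describe({"syscall": "46", "comm": "iptables-restor"}))  # iptables-restor -> sendmsg
print(describe({"syscall": "321", "comm": "runc"}))            # runc -> bpf
```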
Jan 28 06:56:42.937673 containerd[1638]: time="2026-01-28T06:56:42.937612421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:42.938889 containerd[1638]: time="2026-01-28T06:56:42.938729253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 06:56:42.939657 containerd[1638]: time="2026-01-28T06:56:42.939616751Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:42.942342 containerd[1638]: time="2026-01-28T06:56:42.942302033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:42.943481 containerd[1638]: time="2026-01-28T06:56:42.943440883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.780584511s" Jan 28 06:56:42.943633 containerd[1638]: time="2026-01-28T06:56:42.943602891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 06:56:42.945275 containerd[1638]: time="2026-01-28T06:56:42.945233056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 06:56:42.953225 containerd[1638]: time="2026-01-28T06:56:42.953185814Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 06:56:42.965900 containerd[1638]: time="2026-01-28T06:56:42.965864065Z" level=info msg="Container dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:42.986039 kubelet[2936]: E0128 06:56:42.985990 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:42.995740 containerd[1638]: time="2026-01-28T06:56:42.995689664Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481\"" Jan 28 06:56:42.997309 containerd[1638]: time="2026-01-28T06:56:42.997245959Z" level=info msg="StartContainer for \"dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481\"" Jan 28 06:56:43.001278 containerd[1638]: time="2026-01-28T06:56:43.001174684Z" level=info msg="connecting to shim dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481" address="unix:///run/containerd/s/35f3d247e14a96f14cc649f5841db95363f225ad5a023c0cc04d1f23ec983ec9" protocol=ttrpc 
version=3 Jan 28 06:56:43.046210 systemd[1]: Started cri-containerd-dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481.scope - libcontainer container dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481. Jan 28 06:56:43.142000 audit: BPF prog-id=165 op=LOAD Jan 28 06:56:43.142000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3393 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:43.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656665636537306532356166373239623737356462643562383135 Jan 28 06:56:43.142000 audit: BPF prog-id=166 op=LOAD Jan 28 06:56:43.142000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3393 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:43.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656665636537306532356166373239623737356462643562383135 Jan 28 06:56:43.142000 audit: BPF prog-id=166 op=UNLOAD Jan 28 06:56:43.142000 audit[3492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:43.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656665636537306532356166373239623737356462643562383135 Jan 28 06:56:43.142000 audit: BPF prog-id=165 op=UNLOAD Jan 28 06:56:43.142000 audit[3492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:43.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656665636537306532356166373239623737356462643562383135 Jan 28 06:56:43.142000 audit: BPF prog-id=167 op=LOAD Jan 28 06:56:43.142000 audit[3492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3393 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:43.142000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656665636537306532356166373239623737356462643562383135 Jan 28 06:56:43.181047 containerd[1638]: time="2026-01-28T06:56:43.180982265Z" level=info msg="StartContainer for \"dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481\" returns successfully" Jan 28 06:56:43.202382 systemd[1]: cri-containerd-dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481.scope: Deactivated successfully. Jan 28 06:56:43.204000 audit: BPF prog-id=167 op=UNLOAD Jan 28 06:56:43.243627 containerd[1638]: time="2026-01-28T06:56:43.243405809Z" level=info msg="received container exit event container_id:\"dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481\" id:\"dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481\" pid:3505 exited_at:{seconds:1769583403 nanos:207620395}" Jan 28 06:56:43.283360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcefece70e25af729b775dbd5b815a9ba2ba0d8b37357177f666e557db67e481-rootfs.mount: Deactivated successfully. Jan 28 06:56:44.978971 kubelet[2936]: E0128 06:56:44.976480 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:46.982983 kubelet[2936]: E0128 06:56:46.979269 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:47.500255 containerd[1638]: time="2026-01-28T06:56:47.500190848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:47.508560 containerd[1638]: time="2026-01-28T06:56:47.508484567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 06:56:47.513129 containerd[1638]: time="2026-01-28T06:56:47.513062258Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:47.517237 containerd[1638]: time="2026-01-28T06:56:47.517156201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:47.518846 containerd[1638]: time="2026-01-28T06:56:47.518787370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.573212281s" Jan 28 06:56:47.518846 containerd[1638]: time="2026-01-28T06:56:47.518836325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 06:56:47.522029 containerd[1638]: time="2026-01-28T06:56:47.521753971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 06:56:47.637431 containerd[1638]: time="2026-01-28T06:56:47.637359149Z" level=info msg="CreateContainer within sandbox \"c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 06:56:47.648996 containerd[1638]: time="2026-01-28T06:56:47.647521648Z" level=info msg="Container a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:47.659862 containerd[1638]: time="2026-01-28T06:56:47.659797135Z" level=info msg="CreateContainer within sandbox \"c6de65b1d17b8c36653019d641a52e5725143df940a625aeea767463b2597b47\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852\"" Jan 28 06:56:47.661757 containerd[1638]: time="2026-01-28T06:56:47.661532429Z" level=info msg="StartContainer for \"a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852\"" Jan 28 06:56:47.664653 containerd[1638]: time="2026-01-28T06:56:47.664562089Z" level=info msg="connecting to shim a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852" address="unix:///run/containerd/s/370450c86224151ce0ea6a89ce89fa87040da4c6fb8a08c2191262797b459c6a" protocol=ttrpc version=3 Jan 28 06:56:47.702357 systemd[1]: Started cri-containerd-a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852.scope - libcontainer container a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852. Jan 28 06:56:47.735000 audit: BPF prog-id=168 op=LOAD Jan 28 06:56:47.740449 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 28 06:56:47.740631 kernel: audit: type=1334 audit(1769583407.735:564): prog-id=168 op=LOAD Jan 28 06:56:47.743000 audit: BPF prog-id=169 op=LOAD Jan 28 06:56:47.743000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.748411 kernel: audit: type=1334 audit(1769583407.743:565): prog-id=169 op=LOAD Jan 28 06:56:47.748496 kernel: audit: type=1300 audit(1769583407.743:565): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.753885 kernel: audit: type=1327 audit(1769583407.743:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF prog-id=169 op=UNLOAD Jan 28 06:56:47.758760 kernel: audit: type=1334 
audit(1769583407.744:566): prog-id=169 op=UNLOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.764985 kernel: audit: type=1300 audit(1769583407.744:566): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.774970 kernel: audit: type=1327 audit(1769583407.744:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF prog-id=170 op=LOAD Jan 28 06:56:47.777966 kernel: audit: type=1334 audit(1769583407.744:567): prog-id=170 op=LOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.783964 kernel: audit: type=1300 audit(1769583407.744:567): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.792984 kernel: audit: type=1327 audit(1769583407.744:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF prog-id=171 op=LOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF 
prog-id=171 op=UNLOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF prog-id=170 op=UNLOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.744000 audit: BPF prog-id=172 op=LOAD Jan 28 06:56:47.744000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3356 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:47.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363861393866393638613930653362616138663935393465626633 Jan 28 06:56:47.847483 containerd[1638]: time="2026-01-28T06:56:47.847407074Z" level=info msg="StartContainer for \"a468a98f968a90e3baa8f9594ebf3e4086d0c9415490e1d3ff29698c9f7d7852\" returns successfully" Jan 28 06:56:48.214537 kubelet[2936]: I0128 06:56:48.214296 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9d4c7dbf8-9k6s6" podStartSLOduration=1.893770087 podStartE2EDuration="8.214251727s" podCreationTimestamp="2026-01-28 06:56:40 +0000 UTC" firstStartedPulling="2026-01-28 06:56:41.200761203 +0000 UTC m=+26.437418709" lastFinishedPulling="2026-01-28 06:56:47.521242824 +0000 UTC m=+32.757900349" observedRunningTime="2026-01-28 06:56:48.213359569 +0000 UTC m=+33.450017103" watchObservedRunningTime="2026-01-28 06:56:48.214251727 +0000 UTC m=+33.450909239" Jan 28 06:56:48.978355 kubelet[2936]: E0128 06:56:48.978150 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:49.182668 kubelet[2936]: I0128 06:56:49.181992 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:56:50.978603 kubelet[2936]: E0128 06:56:50.976204 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:52.977066 kubelet[2936]: E0128 06:56:52.976702 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:54.485693 containerd[1638]: time="2026-01-28T06:56:54.485523811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:54.488032 containerd[1638]: time="2026-01-28T06:56:54.487992823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 06:56:54.488988 containerd[1638]: time="2026-01-28T06:56:54.488878770Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:54.492186 containerd[1638]: time="2026-01-28T06:56:54.491520825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:56:54.492979 containerd[1638]: time="2026-01-28T06:56:54.492683879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.970886027s" Jan 28 06:56:54.492979 containerd[1638]: time="2026-01-28T06:56:54.492729000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 06:56:54.497972 containerd[1638]: time="2026-01-28T06:56:54.497360075Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 06:56:54.512068 containerd[1638]: time="2026-01-28T06:56:54.510171520Z" level=info msg="Container 36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:56:54.557455 containerd[1638]: time="2026-01-28T06:56:54.557397080Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20\"" Jan 28 06:56:54.558738 containerd[1638]: time="2026-01-28T06:56:54.558700816Z" level=info msg="StartContainer for \"36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20\"" Jan 28 06:56:54.562929 containerd[1638]: time="2026-01-28T06:56:54.562818333Z" level=info msg="connecting to shim 36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20" address="unix:///run/containerd/s/35f3d247e14a96f14cc649f5841db95363f225ad5a023c0cc04d1f23ec983ec9" protocol=ttrpc version=3 Jan 28 06:56:54.637231 systemd[1]: Started 
cri-containerd-36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20.scope - libcontainer container 36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20. Jan 28 06:56:54.729095 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 06:56:54.729415 kernel: audit: type=1334 audit(1769583414.723:572): prog-id=173 op=LOAD Jan 28 06:56:54.723000 audit: BPF prog-id=173 op=LOAD Jan 28 06:56:54.723000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.731819 kernel: audit: type=1300 audit(1769583414.723:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.737170 kernel: audit: type=1327 audit(1769583414.723:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.724000 audit: BPF prog-id=174 op=LOAD Jan 28 06:56:54.741138 kernel: audit: type=1334 audit(1769583414.724:573): prog-id=174 op=LOAD Jan 28 06:56:54.724000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.744311 kernel: audit: type=1300 audit(1769583414.724:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.749059 kernel: audit: type=1327 audit(1769583414.724:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.724000 audit: BPF prog-id=174 op=UNLOAD Jan 28 06:56:54.753255 kernel: audit: type=1334 audit(1769583414.724:574): prog-id=174 op=UNLOAD Jan 28 06:56:54.759414 kernel: audit: type=1300 audit(1769583414.724:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.724000 audit[3596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.767711 kernel: audit: type=1327 audit(1769583414.724:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.726000 audit: BPF prog-id=173 op=UNLOAD Jan 28 06:56:54.726000 audit[3596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.726000 audit: BPF prog-id=175 op=LOAD Jan 28 06:56:54.772014 kernel: audit: type=1334 audit(1769583414.726:575): prog-id=173 op=UNLOAD Jan 28 06:56:54.726000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3393 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:56:54.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336616638373735303734613130613632656430393461663635323932 Jan 28 06:56:54.802333 containerd[1638]: time="2026-01-28T06:56:54.802269875Z" level=info msg="StartContainer for \"36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20\" returns successfully" Jan 28 06:56:54.977665 kubelet[2936]: E0128 06:56:54.977588 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:55.944794 systemd[1]: cri-containerd-36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20.scope: Deactivated successfully. Jan 28 06:56:55.947105 systemd[1]: cri-containerd-36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20.scope: Consumed 831ms CPU time, 162.4M memory peak, 8M read from disk, 171.3M written to disk. 
Jan 28 06:56:55.949000 audit: BPF prog-id=175 op=UNLOAD Jan 28 06:56:55.951616 containerd[1638]: time="2026-01-28T06:56:55.951523894Z" level=info msg="received container exit event container_id:\"36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20\" id:\"36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20\" pid:3609 exited_at:{seconds:1769583415 nanos:950907579}" Jan 28 06:56:56.019756 kubelet[2936]: I0128 06:56:56.019699 2936 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 06:56:56.021582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36af8775074a10a62ed094af65292d78f313c420d0adfc07d63cc38a824b7e20-rootfs.mount: Deactivated successfully. Jan 28 06:56:56.181420 systemd[1]: Created slice kubepods-burstable-pod48297015_9ea4_408f_b23a_bc18759d877d.slice - libcontainer container kubepods-burstable-pod48297015_9ea4_408f_b23a_bc18759d877d.slice. Jan 28 06:56:56.196661 systemd[1]: Created slice kubepods-besteffort-pod11f0b3ec_48a9_43d4_ba78_4be405b03a1e.slice - libcontainer container kubepods-besteffort-pod11f0b3ec_48a9_43d4_ba78_4be405b03a1e.slice. Jan 28 06:56:56.201168 kubelet[2936]: I0128 06:56:56.200793 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xvl\" (UniqueName: \"kubernetes.io/projected/48297015-9ea4-408f-b23a-bc18759d877d-kube-api-access-j9xvl\") pod \"coredns-674b8bbfcf-l9g7w\" (UID: \"48297015-9ea4-408f-b23a-bc18759d877d\") " pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:56:56.205063 kubelet[2936]: I0128 06:56:56.205022 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1023d10f-af49-4cbc-b6ed-d31a2d3bba42-calico-apiserver-certs\") pod \"calico-apiserver-6bf65c897b-bbmbt\" (UID: \"1023d10f-af49-4cbc-b6ed-d31a2d3bba42\") " pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:56:56.205228 kubelet[2936]: I0128 06:56:56.205198 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422c5240-419b-4316-b493-4cac140b34f1-whisker-ca-bundle\") pod \"whisker-5b984fd56f-sfgfj\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:56:56.205359 kubelet[2936]: I0128 06:56:56.205335 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3b161cf-39ef-4b57-bdfb-9046b0dd729b-goldmane-ca-bundle\") pod \"goldmane-666569f655-9zxml\" (UID: \"b3b161cf-39ef-4b57-bdfb-9046b0dd729b\") " pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:56.205513 kubelet[2936]: I0128 06:56:56.205488 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/422c5240-419b-4316-b493-4cac140b34f1-whisker-backend-key-pair\") pod \"whisker-5b984fd56f-sfgfj\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:56:56.205651 kubelet[2936]: I0128 06:56:56.205623 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcnv\" (UniqueName: \"kubernetes.io/projected/f8fa255f-5f12-4516-9e66-e8ceb04f1aa0-kube-api-access-pxcnv\") pod \"coredns-674b8bbfcf-bg29g\" (UID: 
\"f8fa255f-5f12-4516-9e66-e8ceb04f1aa0\") " pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:56:56.207001 kubelet[2936]: I0128 06:56:56.206163 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11f0b3ec-48a9-43d4-ba78-4be405b03a1e-calico-apiserver-certs\") pod \"calico-apiserver-6bf65c897b-gv7nr\" (UID: \"11f0b3ec-48a9-43d4-ba78-4be405b03a1e\") " pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:56:56.207001 kubelet[2936]: I0128 06:56:56.206239 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfff\" (UniqueName: \"kubernetes.io/projected/422c5240-419b-4316-b493-4cac140b34f1-kube-api-access-2zfff\") pod \"whisker-5b984fd56f-sfgfj\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:56:56.207001 kubelet[2936]: I0128 06:56:56.206320 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6hr2\" (UniqueName: \"kubernetes.io/projected/c1237d78-1650-42b9-ac4f-842b943ada74-kube-api-access-s6hr2\") pod \"calico-kube-controllers-74fbf496f-6pckk\" (UID: \"c1237d78-1650-42b9-ac4f-842b943ada74\") " pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:56:56.207001 kubelet[2936]: I0128 06:56:56.206410 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8fa255f-5f12-4516-9e66-e8ceb04f1aa0-config-volume\") pod \"coredns-674b8bbfcf-bg29g\" (UID: \"f8fa255f-5f12-4516-9e66-e8ceb04f1aa0\") " pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:56:56.207001 kubelet[2936]: I0128 06:56:56.206460 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrfx\" (UniqueName: \"kubernetes.io/projected/1023d10f-af49-4cbc-b6ed-d31a2d3bba42-kube-api-access-5vrfx\") pod \"calico-apiserver-6bf65c897b-bbmbt\" (UID: \"1023d10f-af49-4cbc-b6ed-d31a2d3bba42\") " pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:56:56.207304 kubelet[2936]: I0128 06:56:56.206493 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b161cf-39ef-4b57-bdfb-9046b0dd729b-config\") pod \"goldmane-666569f655-9zxml\" (UID: \"b3b161cf-39ef-4b57-bdfb-9046b0dd729b\") " pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:56.207304 kubelet[2936]: I0128 06:56:56.206542 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48297015-9ea4-408f-b23a-bc18759d877d-config-volume\") pod \"coredns-674b8bbfcf-l9g7w\" (UID: \"48297015-9ea4-408f-b23a-bc18759d877d\") " pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:56:56.207304 kubelet[2936]: I0128 06:56:56.206570 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1237d78-1650-42b9-ac4f-842b943ada74-tigera-ca-bundle\") pod \"calico-kube-controllers-74fbf496f-6pckk\" (UID: \"c1237d78-1650-42b9-ac4f-842b943ada74\") " pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:56:56.207304 kubelet[2936]: I0128 06:56:56.206598 2936 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2x2\" (UniqueName: \"kubernetes.io/projected/11f0b3ec-48a9-43d4-ba78-4be405b03a1e-kube-api-access-7r2x2\") pod \"calico-apiserver-6bf65c897b-gv7nr\" (UID: \"11f0b3ec-48a9-43d4-ba78-4be405b03a1e\") " pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:56:56.207304 kubelet[2936]: I0128 06:56:56.206631 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b161cf-39ef-4b57-bdfb-9046b0dd729b-goldmane-key-pair\") pod \"goldmane-666569f655-9zxml\" (UID: \"b3b161cf-39ef-4b57-bdfb-9046b0dd729b\") " pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:56.207533 kubelet[2936]: I0128 06:56:56.206662 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5hq\" (UniqueName: \"kubernetes.io/projected/b3b161cf-39ef-4b57-bdfb-9046b0dd729b-kube-api-access-qr5hq\") pod \"goldmane-666569f655-9zxml\" (UID: \"b3b161cf-39ef-4b57-bdfb-9046b0dd729b\") " pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:56.222266 systemd[1]: Created slice kubepods-burstable-podf8fa255f_5f12_4516_9e66_e8ceb04f1aa0.slice - libcontainer container kubepods-burstable-podf8fa255f_5f12_4516_9e66_e8ceb04f1aa0.slice. Jan 28 06:56:56.238729 systemd[1]: Created slice kubepods-besteffort-pod422c5240_419b_4316_b493_4cac140b34f1.slice - libcontainer container kubepods-besteffort-pod422c5240_419b_4316_b493_4cac140b34f1.slice. Jan 28 06:56:56.240970 containerd[1638]: time="2026-01-28T06:56:56.240022521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 06:56:56.258169 systemd[1]: Created slice kubepods-besteffort-podb3b161cf_39ef_4b57_bdfb_9046b0dd729b.slice - libcontainer container kubepods-besteffort-podb3b161cf_39ef_4b57_bdfb_9046b0dd729b.slice. Jan 28 06:56:56.275410 systemd[1]: Created slice kubepods-besteffort-pod1023d10f_af49_4cbc_b6ed_d31a2d3bba42.slice - libcontainer container kubepods-besteffort-pod1023d10f_af49_4cbc_b6ed_d31a2d3bba42.slice. Jan 28 06:56:56.293413 systemd[1]: Created slice kubepods-besteffort-podc1237d78_1650_42b9_ac4f_842b943ada74.slice - libcontainer container kubepods-besteffort-podc1237d78_1650_42b9_ac4f_842b943ada74.slice. 
Jan 28 06:56:56.495087 containerd[1638]: time="2026-01-28T06:56:56.494249965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,}" Jan 28 06:56:56.522284 containerd[1638]: time="2026-01-28T06:56:56.521328173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:56:56.538352 containerd[1638]: time="2026-01-28T06:56:56.538287377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,}" Jan 28 06:56:56.581540 containerd[1638]: time="2026-01-28T06:56:56.581302963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:56.582896 containerd[1638]: time="2026-01-28T06:56:56.582851784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b984fd56f-sfgfj,Uid:422c5240-419b-4316-b493-4cac140b34f1,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:56.589678 containerd[1638]: time="2026-01-28T06:56:56.589633947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:56:56.602592 containerd[1638]: time="2026-01-28T06:56:56.602364153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:56.983528 containerd[1638]: time="2026-01-28T06:56:56.982184203Z" level=error msg="Failed to destroy network for sandbox \"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:56.986285 systemd[1]: Created slice kubepods-besteffort-podfeae4718_ebbe_416f_b2aa_04c3e4a5379c.slice - libcontainer container kubepods-besteffort-podfeae4718_ebbe_416f_b2aa_04c3e4a5379c.slice. 
Jan 28 06:56:56.994738 containerd[1638]: time="2026-01-28T06:56:56.994443447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,}" Jan 28 06:56:56.996491 containerd[1638]: time="2026-01-28T06:56:56.996454880Z" level=error msg="Failed to destroy network for sandbox \"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:56.997749 containerd[1638]: time="2026-01-28T06:56:56.997666958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b984fd56f-sfgfj,Uid:422c5240-419b-4316-b493-4cac140b34f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.000148 kubelet[2936]: E0128 06:56:57.000049 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.002193 kubelet[2936]: E0128 06:56:57.000197 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:56:57.002193 kubelet[2936]: E0128 06:56:57.000248 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:56:57.002193 kubelet[2936]: E0128 06:56:57.000345 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b984fd56f-sfgfj_calico-system(422c5240-419b-4316-b493-4cac140b34f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b984fd56f-sfgfj_calico-system(422c5240-419b-4316-b493-4cac140b34f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae0f8f095c958e5489cc63723a78baf35bc686cb52b69a852f2946e60abac79d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b984fd56f-sfgfj" podUID="422c5240-419b-4316-b493-4cac140b34f1" Jan 28 06:56:57.003881 containerd[1638]: time="2026-01-28T06:56:57.003266259Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.014803 kubelet[2936]: E0128 06:56:57.014654 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.015345 kubelet[2936]: E0128 06:56:57.014892 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:56:57.015345 kubelet[2936]: E0128 06:56:57.015153 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:56:57.017093 kubelet[2936]: E0128 06:56:57.017000 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l9g7w_kube-system(48297015-9ea4-408f-b23a-bc18759d877d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l9g7w_kube-system(48297015-9ea4-408f-b23a-bc18759d877d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f790859be37d4f80b946c4801962371ecc36ed2201297c1047f3f9e69b1aa86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l9g7w" podUID="48297015-9ea4-408f-b23a-bc18759d877d" Jan 28 06:56:57.054304 containerd[1638]: time="2026-01-28T06:56:57.054210428Z" level=error msg="Failed to destroy network for sandbox \"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.058167 systemd[1]: run-netns-cni\x2d52bde74a\x2dbbc3\x2d34b0\x2dd646\x2d76e66a0a8a0b.mount: Deactivated successfully. 
Jan 28 06:56:57.071966 containerd[1638]: time="2026-01-28T06:56:57.071802186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.073207 kubelet[2936]: E0128 06:56:57.072517 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.076825 kubelet[2936]: E0128 06:56:57.073259 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:56:57.076825 kubelet[2936]: E0128 06:56:57.073296 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:56:57.076825 kubelet[2936]: E0128 06:56:57.073390 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e7bef35b1631ef87532799afb96752eb8339a3663ea3c8423456b629d838c92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:56:57.088250 containerd[1638]: time="2026-01-28T06:56:57.088081513Z" level=error msg="Failed to destroy network for sandbox \"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.092030 containerd[1638]: time="2026-01-28T06:56:57.091746822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.092899 systemd[1]: run-netns-cni\x2d6c45aaf4\x2d66ae\x2db8b0\x2d842c\x2de86f187e5408.mount: Deactivated successfully. Jan 28 06:56:57.097076 kubelet[2936]: E0128 06:56:57.096816 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.097368 kubelet[2936]: E0128 06:56:57.097021 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:56:57.097368 kubelet[2936]: E0128 06:56:57.097297 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:56:57.097810 kubelet[2936]: E0128 06:56:57.097670 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bg29g_kube-system(f8fa255f-5f12-4516-9e66-e8ceb04f1aa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bg29g_kube-system(f8fa255f-5f12-4516-9e66-e8ceb04f1aa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4f05928e85cbb3a581072f4a93a828e723dc56f9f00eabdba6756b2511e86b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bg29g" podUID="f8fa255f-5f12-4516-9e66-e8ceb04f1aa0" Jan 28 06:56:57.101974 containerd[1638]: time="2026-01-28T06:56:57.101898378Z" level=error msg="Failed to destroy network for sandbox \"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.106707 systemd[1]: run-netns-cni\x2d9881ea49\x2d7493\x2d8685\x2d58d4\x2df5df2d06fa3d.mount: Deactivated successfully. 
Jan 28 06:56:57.109677 containerd[1638]: time="2026-01-28T06:56:57.109590156Z" level=error msg="Failed to destroy network for sandbox \"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.110441 containerd[1638]: time="2026-01-28T06:56:57.110379252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.111101 kubelet[2936]: E0128 06:56:57.110919 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.111101 kubelet[2936]: E0128 06:56:57.111016 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:56:57.111101 kubelet[2936]: E0128 06:56:57.111049 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:56:57.111810 kubelet[2936]: E0128 06:56:57.111351 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9dd2f758ebb02e2699fce70fb486ac1d56b61b4dc2eadf343aeda1237fc731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:56:57.116032 containerd[1638]: time="2026-01-28T06:56:57.115802453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.116801 kubelet[2936]: E0128 06:56:57.116743 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.116919 kubelet[2936]: E0128 06:56:57.116832 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:57.116919 kubelet[2936]: E0128 06:56:57.116874 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:56:57.117511 kubelet[2936]: E0128 06:56:57.116995 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c18edf9997b2fa9013b6a00e94a93ca668c7f37deee90ac6fc4b997c9d2128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:56:57.118914 containerd[1638]: time="2026-01-28T06:56:57.118532562Z" level=error msg="Failed to destroy network for sandbox \"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.124054 containerd[1638]: time="2026-01-28T06:56:57.123934377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 
06:56:57.124433 kubelet[2936]: E0128 06:56:57.124370 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.125406 kubelet[2936]: E0128 06:56:57.124578 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:56:57.125406 kubelet[2936]: E0128 06:56:57.124632 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:56:57.125406 kubelet[2936]: E0128 06:56:57.124731 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be10ba71cc9c52e9454088cc5d8ce04eeec5fb42b40417cd42e94d278a966d1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:56:57.190586 containerd[1638]: time="2026-01-28T06:56:57.190504375Z" level=error msg="Failed to destroy network for sandbox \"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.192536 containerd[1638]: time="2026-01-28T06:56:57.192441907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.193014 kubelet[2936]: E0128 06:56:57.192909 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:56:57.193317 kubelet[2936]: E0128 06:56:57.193052 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:57.193317 kubelet[2936]: E0128 06:56:57.193100 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:56:57.193317 kubelet[2936]: E0128 06:56:57.193187 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5cf7125f3665144b8feaafa65769f2367aabc5f90e26f28768bc8814e405a0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:56:58.018746 systemd[1]: run-netns-cni\x2d88809e14\x2d5d5b\x2dc4fe\x2d4c82\x2de23a318de170.mount: Deactivated successfully. Jan 28 06:56:58.019485 systemd[1]: run-netns-cni\x2dbc7a2513\x2df212\x2d084c\x2d8ede\x2df157fe21e02f.mount: Deactivated successfully. Jan 28 06:56:58.019724 systemd[1]: run-netns-cni\x2d02df5375\x2d1eb0\x2d1102\x2dec21\x2ddf3869d85476.mount: Deactivated successfully. 
Jan 28 06:57:06.546163 kubelet[2936]: I0128 06:57:06.546031 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:57:06.750000 audit[3868]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:06.768921 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 06:57:06.769104 kernel: audit: type=1325 audit(1769583426.750:578): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:06.769182 kernel: audit: type=1300 audit(1769583426.750:578): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff61c3d8b0 a2=0 a3=7fff61c3d89c items=0 ppid=3048 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:06.750000 audit[3868]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff61c3d8b0 a2=0 a3=7fff61c3d89c items=0 ppid=3048 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:06.772373 kernel: audit: type=1327 audit(1769583426.750:578): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:06.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:06.779000 audit[3868]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:06.785527 kernel: audit: type=1325 audit(1769583426.779:579): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:06.779000 audit[3868]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff61c3d8b0 a2=0 a3=7fff61c3d89c items=0 ppid=3048 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:06.793011 kernel: audit: type=1300 audit(1769583426.779:579): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff61c3d8b0 a2=0 a3=7fff61c3d89c items=0 ppid=3048 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:06.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:06.798977 kernel: audit: type=1327 audit(1769583426.779:579): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:07.977973 containerd[1638]: time="2026-01-28T06:57:07.977819413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:08.296973 containerd[1638]: time="2026-01-28T06:57:08.295225414Z" level=error msg="Failed to destroy network for sandbox 
\"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:08.299765 systemd[1]: run-netns-cni\x2d5bf82666\x2da3d5\x2dfcef\x2dbe63\x2d54f7089563a9.mount: Deactivated successfully. Jan 28 06:57:08.300822 containerd[1638]: time="2026-01-28T06:57:08.300762864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:08.304697 kubelet[2936]: E0128 06:57:08.304010 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:08.304697 kubelet[2936]: E0128 06:57:08.304110 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:57:08.304697 kubelet[2936]: E0128 06:57:08.304149 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9zxml" Jan 28 06:57:08.307006 kubelet[2936]: E0128 06:57:08.304238 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"026e4cb0d83b55b0c19afafd56d960f4b3e9512346f336693378bc2c735a7fa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:08.982115 containerd[1638]: time="2026-01-28T06:57:08.982032664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,}" Jan 28 06:57:08.996010 containerd[1638]: time="2026-01-28T06:57:08.993209792Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:09.137278 containerd[1638]: time="2026-01-28T06:57:09.137206633Z" level=error msg="Failed to destroy network for sandbox \"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.141482 systemd[1]: run-netns-cni\x2df69c24bf\x2de803\x2d3425\x2dde46\x2d3b564ce36645.mount: Deactivated successfully. Jan 28 06:57:09.144457 containerd[1638]: time="2026-01-28T06:57:09.142623852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.144632 kubelet[2936]: E0128 06:57:09.143838 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.144632 kubelet[2936]: E0128 06:57:09.143922 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:57:09.144632 kubelet[2936]: E0128 06:57:09.143993 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bg29g" Jan 28 06:57:09.146609 kubelet[2936]: E0128 06:57:09.144099 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bg29g_kube-system(f8fa255f-5f12-4516-9e66-e8ceb04f1aa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bg29g_kube-system(f8fa255f-5f12-4516-9e66-e8ceb04f1aa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c93b9a0dc8d3bca0861568e70c3480712d0b61dfb36400b553459508656231c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bg29g" podUID="f8fa255f-5f12-4516-9e66-e8ceb04f1aa0" Jan 28 06:57:09.222529 containerd[1638]: time="2026-01-28T06:57:09.222416196Z" 
level=error msg="Failed to destroy network for sandbox \"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.226918 systemd[1]: run-netns-cni\x2d6f1eb44a\x2d6349\x2d318a\x2d2211\x2d9928c147ec5a.mount: Deactivated successfully. Jan 28 06:57:09.231545 containerd[1638]: time="2026-01-28T06:57:09.231359040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.231920 kubelet[2936]: E0128 06:57:09.231788 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:09.231920 kubelet[2936]: E0128 06:57:09.231887 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:57:09.235039 kubelet[2936]: E0128 06:57:09.231920 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" Jan 28 06:57:09.235039 kubelet[2936]: E0128 06:57:09.232028 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4247625faff24fa32976760691bad0220d3ca76048d92d74018e6a6641acda29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:57:09.977084 containerd[1638]: time="2026-01-28T06:57:09.977022090Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:57:10.125575 containerd[1638]: time="2026-01-28T06:57:10.125496025Z" level=error msg="Failed to destroy network for sandbox \"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:10.130783 systemd[1]: run-netns-cni\x2dca18e310\x2d5a36\x2d5156\x2dd765\x2d785534d1cf62.mount: Deactivated successfully. Jan 28 06:57:10.141009 containerd[1638]: time="2026-01-28T06:57:10.140800498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:10.142170 kubelet[2936]: E0128 06:57:10.142025 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:10.142170 kubelet[2936]: E0128 06:57:10.142128 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:57:10.143422 kubelet[2936]: E0128 06:57:10.142474 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" Jan 28 06:57:10.143422 kubelet[2936]: E0128 06:57:10.142591 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d0fd5959a7e7a156fff48de0204caf7434aaf263e9f253f1401bea0960bff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 
06:57:11.253688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862004065.mount: Deactivated successfully. Jan 28 06:57:11.328556 containerd[1638]: time="2026-01-28T06:57:11.328466722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:57:11.362654 containerd[1638]: time="2026-01-28T06:57:11.362184495Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:57:11.363444 containerd[1638]: time="2026-01-28T06:57:11.363400468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 06:57:11.365126 containerd[1638]: time="2026-01-28T06:57:11.365091079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:57:11.366317 containerd[1638]: time="2026-01-28T06:57:11.366276760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 15.126155281s" Jan 28 06:57:11.366418 containerd[1638]: time="2026-01-28T06:57:11.366336017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 06:57:11.434736 containerd[1638]: time="2026-01-28T06:57:11.434670199Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 06:57:11.534208 containerd[1638]: time="2026-01-28T06:57:11.533172721Z" level=info msg="Container 0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:57:11.535296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451445185.mount: Deactivated successfully. Jan 28 06:57:11.590692 containerd[1638]: time="2026-01-28T06:57:11.590574487Z" level=info msg="CreateContainer within sandbox \"754694bda7124e1d488edc74187acf6256e42ccbc1107eaf0a14f646d62d817c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e\"" Jan 28 06:57:11.594702 containerd[1638]: time="2026-01-28T06:57:11.594144512Z" level=info msg="StartContainer for \"0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e\"" Jan 28 06:57:11.619752 containerd[1638]: time="2026-01-28T06:57:11.619685044Z" level=info msg="connecting to shim 0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e" address="unix:///run/containerd/s/35f3d247e14a96f14cc649f5841db95363f225ad5a023c0cc04d1f23ec983ec9" protocol=ttrpc version=3 Jan 28 06:57:11.776688 systemd[1]: Started cri-containerd-0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e.scope - libcontainer container 0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e. 
Jan 28 06:57:11.868000 audit: BPF prog-id=176 op=LOAD Jan 28 06:57:11.879372 kernel: audit: type=1334 audit(1769583431.868:580): prog-id=176 op=LOAD Jan 28 06:57:11.868000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.887170 kernel: audit: type=1300 audit(1769583431.868:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.893016 kernel: audit: type=1327 audit(1769583431.868:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.876000 audit: BPF prog-id=177 op=LOAD Jan 28 06:57:11.876000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.899911 kernel: audit: type=1334 audit(1769583431.876:581): prog-id=177 op=LOAD Jan 28 06:57:11.900203 kernel: audit: type=1300 audit(1769583431.876:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.905461 kernel: audit: type=1327 audit(1769583431.876:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.909028 kernel: audit: type=1334 audit(1769583431.876:582): prog-id=177 op=UNLOAD Jan 28 06:57:11.876000 audit: BPF prog-id=177 op=UNLOAD Jan 28 06:57:11.876000 audit[3971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.876000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.918931 kernel: audit: type=1300 audit(1769583431.876:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.919239 kernel: audit: type=1327 audit(1769583431.876:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.876000 audit: BPF prog-id=176 op=UNLOAD Jan 28 06:57:11.923988 kernel: audit: type=1334 audit(1769583431.876:583): prog-id=176 op=UNLOAD Jan 28 06:57:11.876000 audit[3971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.877000 audit: BPF prog-id=178 op=LOAD Jan 28 06:57:11.877000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3393 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:11.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061366233386566316230626664313330346364303966316633393563 Jan 28 06:57:11.967790 containerd[1638]: time="2026-01-28T06:57:11.967676471Z" level=info msg="StartContainer for \"0a6b38ef1b0bfd1304cd09f1f395c5c69e6c938c9b2d5eefa27c51041fc0857e\" returns successfully" Jan 28 06:57:11.979862 containerd[1638]: time="2026-01-28T06:57:11.979510962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:11.979862 containerd[1638]: time="2026-01-28T06:57:11.979688306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b984fd56f-sfgfj,Uid:422c5240-419b-4316-b493-4cac140b34f1,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:11.979862 containerd[1638]: time="2026-01-28T06:57:11.979804402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,}" Jan 28 06:57:11.981205 containerd[1638]: time="2026-01-28T06:57:11.981174006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:57:12.257081 containerd[1638]: time="2026-01-28T06:57:12.256999858Z" level=error msg="Failed to destroy network for sandbox \"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.273412 systemd[1]: run-netns-cni\x2d26f9216b\x2da0db\x2d04f3\x2d150f\x2d59db0e649950.mount: Deactivated successfully. Jan 28 06:57:12.274610 containerd[1638]: time="2026-01-28T06:57:12.274425803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.275753 kubelet[2936]: E0128 06:57:12.274886 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.275753 kubelet[2936]: E0128 06:57:12.275185 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:57:12.275753 kubelet[2936]: E0128 06:57:12.275250 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hjw4" Jan 28 06:57:12.281405 kubelet[2936]: E0128 06:57:12.281302 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89fba120df3248546616cb879c3a8ffd5d29fe782b56ffe1ac42656500091c03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:57:12.311877 containerd[1638]: time="2026-01-28T06:57:12.311784955Z" level=error 
msg="Failed to destroy network for sandbox \"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.316087 systemd[1]: run-netns-cni\x2d9e174cd6\x2d039e\x2dcb0d\x2d31d0\x2dc83a2d004b1f.mount: Deactivated successfully. Jan 28 06:57:12.321040 containerd[1638]: time="2026-01-28T06:57:12.320844238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b984fd56f-sfgfj,Uid:422c5240-419b-4316-b493-4cac140b34f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.321205 containerd[1638]: time="2026-01-28T06:57:12.321155104Z" level=error msg="Failed to destroy network for sandbox \"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.323974 kubelet[2936]: E0128 06:57:12.322316 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.323974 kubelet[2936]: E0128 06:57:12.322411 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:57:12.323974 kubelet[2936]: E0128 06:57:12.322449 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b984fd56f-sfgfj" Jan 28 06:57:12.324187 kubelet[2936]: E0128 06:57:12.322529 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b984fd56f-sfgfj_calico-system(422c5240-419b-4316-b493-4cac140b34f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b984fd56f-sfgfj_calico-system(422c5240-419b-4316-b493-4cac140b34f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b47cb8a77543b9b3c4c92d57381b07ac84498e793aedaa94bb3d01dc377cdfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-5b984fd56f-sfgfj" podUID="422c5240-419b-4316-b493-4cac140b34f1" Jan 28 06:57:12.325075 systemd[1]: run-netns-cni\x2d3ad5f198\x2d32a2\x2d023e\x2ddd10\x2d469ba1f48104.mount: Deactivated successfully. Jan 28 06:57:12.331845 containerd[1638]: time="2026-01-28T06:57:12.331311043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.332656 kubelet[2936]: E0128 06:57:12.332457 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.332656 kubelet[2936]: E0128 06:57:12.332528 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:57:12.332656 kubelet[2936]: E0128 06:57:12.332559 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9g7w" Jan 28 06:57:12.333727 kubelet[2936]: E0128 06:57:12.332645 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l9g7w_kube-system(48297015-9ea4-408f-b23a-bc18759d877d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l9g7w_kube-system(48297015-9ea4-408f-b23a-bc18759d877d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0551f2d62dfb572e5a30ae391a95f52a7a9329756343d5b79e478a038d2570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l9g7w" podUID="48297015-9ea4-408f-b23a-bc18759d877d" Jan 28 06:57:12.347135 containerd[1638]: time="2026-01-28T06:57:12.345444502Z" level=error msg="Failed to destroy network for sandbox \"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.351492 systemd[1]: run-netns-cni\x2de9d63fe8\x2dc6c7\x2d99cc\x2dd9be\x2d24acc304c3b3.mount: Deactivated successfully. 
Jan 28 06:57:12.356893 containerd[1638]: time="2026-01-28T06:57:12.356712907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.357210 kubelet[2936]: E0128 06:57:12.357141 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:57:12.358448 kubelet[2936]: E0128 06:57:12.357240 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:57:12.358448 kubelet[2936]: E0128 06:57:12.357285 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" Jan 28 06:57:12.358448 kubelet[2936]: E0128 06:57:12.357368 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdb2842e6a3ce86c7e4d8c0ac6d1263330892750f0c9656db15200d42a95c169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:12.474061 kubelet[2936]: I0128 06:57:12.470725 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9jfsj" podStartSLOduration=2.252732488 podStartE2EDuration="32.459316873s" podCreationTimestamp="2026-01-28 06:56:40 +0000 UTC" firstStartedPulling="2026-01-28 06:56:41.160835843 +0000 UTC m=+26.397493360" lastFinishedPulling="2026-01-28 06:57:11.367420228 +0000 UTC m=+56.604077745" observedRunningTime="2026-01-28 06:57:12.435066611 +0000 UTC m=+57.671724152" watchObservedRunningTime="2026-01-28 06:57:12.459316873 +0000 UTC m=+57.695974398" Jan 28 06:57:12.573416 kernel: wireguard: WireGuard 1.0.0 
loaded. See www.wireguard.com for information. Jan 28 06:57:12.573660 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 28 06:57:13.483882 kubelet[2936]: I0128 06:57:13.483809 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422c5240-419b-4316-b493-4cac140b34f1-whisker-ca-bundle\") pod \"422c5240-419b-4316-b493-4cac140b34f1\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " Jan 28 06:57:13.484708 kubelet[2936]: I0128 06:57:13.483911 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zfff\" (UniqueName: \"kubernetes.io/projected/422c5240-419b-4316-b493-4cac140b34f1-kube-api-access-2zfff\") pod \"422c5240-419b-4316-b493-4cac140b34f1\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " Jan 28 06:57:13.484708 kubelet[2936]: I0128 06:57:13.483976 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/422c5240-419b-4316-b493-4cac140b34f1-whisker-backend-key-pair\") pod \"422c5240-419b-4316-b493-4cac140b34f1\" (UID: \"422c5240-419b-4316-b493-4cac140b34f1\") " Jan 28 06:57:13.490977 kubelet[2936]: I0128 06:57:13.487680 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/422c5240-419b-4316-b493-4cac140b34f1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "422c5240-419b-4316-b493-4cac140b34f1" (UID: "422c5240-419b-4316-b493-4cac140b34f1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 06:57:13.499345 systemd[1]: var-lib-kubelet-pods-422c5240\x2d419b\x2d4316\x2db493\x2d4cac140b34f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2zfff.mount: Deactivated successfully. Jan 28 06:57:13.504133 kubelet[2936]: I0128 06:57:13.502183 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422c5240-419b-4316-b493-4cac140b34f1-kube-api-access-2zfff" (OuterVolumeSpecName: "kube-api-access-2zfff") pod "422c5240-419b-4316-b493-4cac140b34f1" (UID: "422c5240-419b-4316-b493-4cac140b34f1"). InnerVolumeSpecName "kube-api-access-2zfff". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 06:57:13.506343 kubelet[2936]: I0128 06:57:13.505136 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422c5240-419b-4316-b493-4cac140b34f1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "422c5240-419b-4316-b493-4cac140b34f1" (UID: "422c5240-419b-4316-b493-4cac140b34f1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 06:57:13.507562 systemd[1]: var-lib-kubelet-pods-422c5240\x2d419b\x2d4316\x2db493\x2d4cac140b34f1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 28 06:57:13.584727 kubelet[2936]: I0128 06:57:13.584584 2936 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zfff\" (UniqueName: \"kubernetes.io/projected/422c5240-419b-4316-b493-4cac140b34f1-kube-api-access-2zfff\") on node \"srv-gf17r.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:57:13.584727 kubelet[2936]: I0128 06:57:13.584655 2936 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/422c5240-419b-4316-b493-4cac140b34f1-whisker-backend-key-pair\") on node \"srv-gf17r.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:57:13.584727 kubelet[2936]: I0128 06:57:13.584678 2936 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/422c5240-419b-4316-b493-4cac140b34f1-whisker-ca-bundle\") on node \"srv-gf17r.gb1.brightbox.com\" DevicePath \"\"" Jan 28 06:57:14.394418 systemd[1]: Removed slice kubepods-besteffort-pod422c5240_419b_4316_b493_4cac140b34f1.slice - libcontainer container kubepods-besteffort-pod422c5240_419b_4316_b493_4cac140b34f1.slice. Jan 28 06:57:14.612984 kubelet[2936]: I0128 06:57:14.612892 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7420162d-3e1e-4922-8cd8-19db25c1125f-whisker-backend-key-pair\") pod \"whisker-5457cf895c-z6ltp\" (UID: \"7420162d-3e1e-4922-8cd8-19db25c1125f\") " pod="calico-system/whisker-5457cf895c-z6ltp" Jan 28 06:57:14.615766 kubelet[2936]: I0128 06:57:14.614549 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420162d-3e1e-4922-8cd8-19db25c1125f-whisker-ca-bundle\") pod \"whisker-5457cf895c-z6ltp\" (UID: \"7420162d-3e1e-4922-8cd8-19db25c1125f\") " pod="calico-system/whisker-5457cf895c-z6ltp" Jan 28 06:57:14.615766 kubelet[2936]: I0128 06:57:14.614630 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhfv\" (UniqueName: \"kubernetes.io/projected/7420162d-3e1e-4922-8cd8-19db25c1125f-kube-api-access-cjhfv\") pod \"whisker-5457cf895c-z6ltp\" (UID: \"7420162d-3e1e-4922-8cd8-19db25c1125f\") " pod="calico-system/whisker-5457cf895c-z6ltp" Jan 28 06:57:14.632647 systemd[1]: Created slice kubepods-besteffort-pod7420162d_3e1e_4922_8cd8_19db25c1125f.slice - libcontainer container kubepods-besteffort-pod7420162d_3e1e_4922_8cd8_19db25c1125f.slice. 
Jan 28 06:57:14.943723 containerd[1638]: time="2026-01-28T06:57:14.943343231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5457cf895c-z6ltp,Uid:7420162d-3e1e-4922-8cd8-19db25c1125f,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:15.015296 kubelet[2936]: I0128 06:57:15.014700 2936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422c5240-419b-4316-b493-4cac140b34f1" path="/var/lib/kubelet/pods/422c5240-419b-4316-b493-4cac140b34f1/volumes" Jan 28 06:57:15.163000 audit: BPF prog-id=179 op=LOAD Jan 28 06:57:15.163000 audit[4351]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdf922790 a2=98 a3=1fffffffffffffff items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.163000 audit: BPF prog-id=179 op=UNLOAD Jan 28 06:57:15.163000 audit[4351]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcdf922760 a3=0 items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.164000 audit: BPF prog-id=180 op=LOAD Jan 28 06:57:15.164000 audit[4351]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdf922670 a2=94 a3=3 items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.164000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.164000 audit: BPF prog-id=180 op=UNLOAD Jan 28 06:57:15.164000 audit[4351]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcdf922670 a2=94 a3=3 items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.164000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.164000 audit: BPF prog-id=181 op=LOAD Jan 28 06:57:15.164000 audit[4351]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdf9226b0 a2=94 a3=7ffcdf922890 items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.164000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.164000 audit: BPF prog-id=181 op=UNLOAD Jan 28 06:57:15.164000 audit[4351]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcdf9226b0 a2=94 a3=7ffcdf922890 items=0 ppid=4216 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.164000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:57:15.169000 audit: BPF prog-id=182 op=LOAD Jan 28 06:57:15.169000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd174f9890 a2=98 a3=3 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.169000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.169000 audit: BPF prog-id=182 op=UNLOAD Jan 28 06:57:15.169000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd174f9860 a3=0 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.169000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.170000 audit: BPF prog-id=183 op=LOAD Jan 28 06:57:15.170000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd174f9680 a2=94 a3=54428f items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.170000 audit: BPF prog-id=183 op=UNLOAD Jan 28 06:57:15.170000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd174f9680 a2=94 a3=54428f items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.170000 audit: BPF prog-id=184 op=LOAD Jan 28 06:57:15.170000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd174f96b0 a2=94 a3=2 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.170000 audit: BPF 
prog-id=184 op=UNLOAD Jan 28 06:57:15.170000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd174f96b0 a2=0 a3=2 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.445838 systemd-networkd[1540]: cali9e4b7d97ec5: Link UP Jan 28 06:57:15.457587 systemd-networkd[1540]: cali9e4b7d97ec5: Gained carrier Jan 28 06:57:15.495475 containerd[1638]: 2026-01-28 06:57:15.115 [INFO][4306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0 whisker-5457cf895c- calico-system 7420162d-3e1e-4922-8cd8-19db25c1125f 970 0 2026-01-28 06:57:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5457cf895c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com whisker-5457cf895c-z6ltp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9e4b7d97ec5 [] [] }} ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-" Jan 28 06:57:15.495475 containerd[1638]: 2026-01-28 06:57:15.116 [INFO][4306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.495475 containerd[1638]: 2026-01-28 06:57:15.336 [INFO][4337] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" HandleID="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Workload="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.496000 audit: BPF prog-id=185 op=LOAD Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.338 [INFO][4337] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" HandleID="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Workload="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"whisker-5457cf895c-z6ltp", "timestamp":"2026-01-28 06:57:15.336026008 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.338 [INFO][4337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.339 [INFO][4337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
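The audit: BPF, SYSCALL, and PROCTITLE records interleaved with the CNI trace here come from calico-node driving bpftool to set up its pinned maps; each PROCTITLE field is the helper's command line, hex-encoded with NUL bytes between arguments. The long proctitle repeated above decodes to "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0", and the short one to "bpftool map list --json". A minimal decoder sketch, assuming Python for illustration (decode_proctitle is an ad-hoc name):

def decode_proctitle(hex_proctitle):
    # auditd logs argv as one hex string; the arguments are separated by NUL bytes.
    return bytes.fromhex(hex_proctitle).decode("utf-8", "replace").replace("\x00", " ")

print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# bpftool map list --json

The same decoding applies to the bpftool prog load, runc, and iptables-nft-restore proctitles later in this log.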
Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.339 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.365 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.378 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.389 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.392 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.497829 containerd[1638]: 2026-01-28 06:57:15.395 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.395 [INFO][4337] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.399 [INFO][4337] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780 Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.407 [INFO][4337] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.419 [INFO][4337] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.193/26] block=192.168.38.192/26 handle="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.419 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.193/26] handle="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.419 [INFO][4337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
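The IPAM trace above shows calico-node confirming this node's affinity for the block 192.168.38.192/26 and claiming 192.168.38.193 from it for the whisker pod; the records that follow hand that address to the workload endpoint as a /32. The block arithmetic is ordinary CIDR math, which a quick check with Python's ipaddress module illustrates (the variable names are ad hoc):

import ipaddress

block = ipaddress.ip_network("192.168.38.192/26")   # the node's affinity block
pod_ip = ipaddress.ip_address("192.168.38.193")     # the address claimed for the pod

# A /26 block covers 64 addresses, 192.168.38.192 through 192.168.38.255.
print(block.num_addresses, block.network_address, block.broadcast_address)
print(pod_ip in block)   # True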
Jan 28 06:57:15.498305 containerd[1638]: 2026-01-28 06:57:15.419 [INFO][4337] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.193/26] IPv6=[] ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" HandleID="k8s-pod-network.57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Workload="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.496000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd174f9570 a2=94 a3=1 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.496000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.499000 audit: BPF prog-id=185 op=UNLOAD Jan 28 06:57:15.499000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd174f9570 a2=94 a3=1 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.499000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.504522 containerd[1638]: 2026-01-28 06:57:15.424 [INFO][4306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0", GenerateName:"whisker-5457cf895c-", Namespace:"calico-system", SelfLink:"", UID:"7420162d-3e1e-4922-8cd8-19db25c1125f", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 57, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5457cf895c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5457cf895c-z6ltp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e4b7d97ec5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:15.504522 containerd[1638]: 2026-01-28 06:57:15.425 [INFO][4306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.193/32] ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.504717 containerd[1638]: 2026-01-28 06:57:15.425 [INFO][4306] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e4b7d97ec5 ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.504717 containerd[1638]: 2026-01-28 06:57:15.452 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.504823 containerd[1638]: 2026-01-28 06:57:15.455 [INFO][4306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0", GenerateName:"whisker-5457cf895c-", Namespace:"calico-system", SelfLink:"", UID:"7420162d-3e1e-4922-8cd8-19db25c1125f", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 57, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5457cf895c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780", Pod:"whisker-5457cf895c-z6ltp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e4b7d97ec5", MAC:"66:55:f6:f7:42:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:15.504933 containerd[1638]: 2026-01-28 06:57:15.486 [INFO][4306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" Namespace="calico-system" Pod="whisker-5457cf895c-z6ltp" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-whisker--5457cf895c--z6ltp-eth0" Jan 28 06:57:15.528000 audit: BPF prog-id=186 op=LOAD Jan 28 06:57:15.528000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd174f9560 a2=94 a3=4 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.528000 audit: BPF prog-id=186 op=UNLOAD Jan 28 06:57:15.528000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 
a1=7ffd174f9560 a2=0 a3=4 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.528000 audit: BPF prog-id=187 op=LOAD Jan 28 06:57:15.528000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd174f93c0 a2=94 a3=5 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.528000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.529000 audit: BPF prog-id=187 op=UNLOAD Jan 28 06:57:15.529000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd174f93c0 a2=0 a3=5 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.529000 audit: BPF prog-id=188 op=LOAD Jan 28 06:57:15.529000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd174f95e0 a2=94 a3=6 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.529000 audit: BPF prog-id=188 op=UNLOAD Jan 28 06:57:15.529000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd174f95e0 a2=0 a3=6 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.529000 audit: BPF prog-id=189 op=LOAD Jan 28 06:57:15.529000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd174f8d90 a2=94 a3=88 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.530000 audit: BPF prog-id=190 op=LOAD Jan 28 06:57:15.530000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd174f8c10 a2=94 a3=2 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.530000 audit: BPF prog-id=190 op=UNLOAD Jan 28 06:57:15.530000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd174f8c40 a2=0 a3=7ffd174f8d40 items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.530000 audit: BPF prog-id=189 op=UNLOAD Jan 28 06:57:15.530000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=f16fd10 a2=0 a3=701c09a92ff790cc items=0 ppid=4216 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:57:15.549000 audit: BPF prog-id=191 op=LOAD Jan 28 06:57:15.549000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff960be3b0 a2=98 a3=1999999999999999 items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.549000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.550000 audit: BPF prog-id=191 op=UNLOAD Jan 28 06:57:15.550000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff960be380 a3=0 items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.550000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.550000 audit: BPF prog-id=192 op=LOAD Jan 28 06:57:15.550000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff960be290 a2=94 a3=ffff items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.550000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.551000 audit: BPF prog-id=192 op=UNLOAD Jan 28 06:57:15.551000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff960be290 a2=94 a3=ffff items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.551000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.551000 audit: BPF prog-id=193 op=LOAD Jan 28 06:57:15.551000 audit[4368]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=5 a1=7fff960be2d0 a2=94 a3=7fff960be4b0 items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.551000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.551000 audit: BPF prog-id=193 op=UNLOAD Jan 28 06:57:15.551000 audit[4368]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff960be2d0 a2=94 a3=7fff960be4b0 items=0 ppid=4216 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.551000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:57:15.715241 systemd-networkd[1540]: vxlan.calico: Link UP Jan 28 06:57:15.715256 systemd-networkd[1540]: vxlan.calico: Gained carrier Jan 28 06:57:15.739981 containerd[1638]: time="2026-01-28T06:57:15.739495974Z" level=info msg="connecting to shim 57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780" address="unix:///run/containerd/s/48c426e89ca4bfd0cdcbc4ca8d29f010804ecc4d440cce7e37b7391c2bf4d6a8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:15.824000 audit: BPF prog-id=194 op=LOAD Jan 28 06:57:15.824000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd00504170 a2=98 a3=20 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.824000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.825000 audit: BPF prog-id=194 op=UNLOAD Jan 28 06:57:15.825000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd00504140 a3=0 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.825000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.826000 audit: BPF prog-id=195 op=LOAD Jan 28 06:57:15.826000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd00503f80 a2=94 a3=54428f items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.826000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.827000 audit: BPF prog-id=195 op=UNLOAD Jan 28 06:57:15.827000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd00503f80 a2=94 a3=54428f items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.827000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.827000 audit: BPF prog-id=196 op=LOAD Jan 28 06:57:15.827000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd00503fb0 a2=94 a3=2 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.827000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.827000 audit: BPF prog-id=196 op=UNLOAD Jan 28 06:57:15.827000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd00503fb0 a2=0 a3=2 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.827000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.827000 audit: BPF prog-id=197 op=LOAD Jan 28 06:57:15.827000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd00503d60 a2=94 a3=4 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.827000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.828000 audit: BPF prog-id=197 op=UNLOAD Jan 28 06:57:15.828000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd00503d60 a2=94 a3=4 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.828000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.828000 audit: BPF prog-id=198 op=LOAD Jan 28 06:57:15.828000 audit[4425]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7ffd00503e60 a2=94 a3=7ffd00503fe0 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.828000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.828000 audit: BPF prog-id=198 op=UNLOAD Jan 28 06:57:15.828000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd00503e60 a2=0 a3=7ffd00503fe0 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.828000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.825333 systemd[1]: Started cri-containerd-57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780.scope - libcontainer container 57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780. Jan 28 06:57:15.840000 audit: BPF prog-id=199 op=LOAD Jan 28 06:57:15.840000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd00503590 a2=94 a3=2 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.840000 audit: BPF prog-id=199 op=UNLOAD Jan 28 06:57:15.840000 audit[4425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd00503590 a2=0 a3=2 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.840000 audit: BPF prog-id=200 op=LOAD Jan 28 06:57:15.840000 audit[4425]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd00503690 a2=94 a3=30 items=0 ppid=4216 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.840000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:57:15.870000 audit: BPF prog-id=201 op=LOAD Jan 28 06:57:15.870000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdde5c1d0 a2=98 a3=0 items=0 ppid=4216 pid=4438 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.871000 audit: BPF prog-id=201 op=UNLOAD Jan 28 06:57:15.871000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcdde5c1a0 a3=0 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.871000 audit: BPF prog-id=202 op=LOAD Jan 28 06:57:15.871000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcdde5bfc0 a2=94 a3=54428f items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.871000 audit: BPF prog-id=202 op=UNLOAD Jan 28 06:57:15.871000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcdde5bfc0 a2=94 a3=54428f items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.871000 audit: BPF prog-id=203 op=LOAD Jan 28 06:57:15.871000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcdde5bff0 a2=94 a3=2 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.871000 audit: BPF prog-id=203 op=UNLOAD Jan 28 06:57:15.871000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcdde5bff0 a2=0 a3=2 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:15.890000 audit: BPF prog-id=204 op=LOAD Jan 28 
06:57:15.899000 audit: BPF prog-id=205 op=LOAD Jan 28 06:57:15.899000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=205 op=UNLOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=206 op=LOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=207 op=LOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=207 op=UNLOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=206 op=UNLOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:15.901000 audit: BPF prog-id=208 op=LOAD Jan 28 06:57:15.901000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4394 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:15.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643939616235313261343735353130323030313837383130323131 Jan 28 06:57:16.002320 containerd[1638]: time="2026-01-28T06:57:16.002259240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5457cf895c-z6ltp,Uid:7420162d-3e1e-4922-8cd8-19db25c1125f,Namespace:calico-system,Attempt:0,} returns sandbox id \"57d99ab512a4755102001878102119b0eff5c294d6c43835b6cd79fcefed5780\"" Jan 28 06:57:16.018895 containerd[1638]: time="2026-01-28T06:57:16.018835834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:57:16.178000 audit: BPF prog-id=209 op=LOAD Jan 28 06:57:16.178000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcdde5beb0 a2=94 a3=1 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.178000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.178000 audit: BPF prog-id=209 op=UNLOAD Jan 28 06:57:16.178000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcdde5beb0 a2=94 a3=1 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.178000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.194000 audit: BPF prog-id=210 op=LOAD Jan 28 06:57:16.194000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcdde5bea0 a2=94 a3=4 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.194000 audit: BPF 
prog-id=210 op=UNLOAD Jan 28 06:57:16.194000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcdde5bea0 a2=0 a3=4 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.194000 audit: BPF prog-id=211 op=LOAD Jan 28 06:57:16.194000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcdde5bd00 a2=94 a3=5 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.194000 audit: BPF prog-id=211 op=UNLOAD Jan 28 06:57:16.194000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcdde5bd00 a2=0 a3=5 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.194000 audit: BPF prog-id=212 op=LOAD Jan 28 06:57:16.194000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcdde5bf20 a2=94 a3=6 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.195000 audit: BPF prog-id=212 op=UNLOAD Jan 28 06:57:16.195000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcdde5bf20 a2=0 a3=6 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.195000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.195000 audit: BPF prog-id=213 op=LOAD Jan 28 06:57:16.195000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcdde5b6d0 a2=94 a3=88 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.195000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.195000 audit: BPF prog-id=214 op=LOAD Jan 28 06:57:16.195000 audit[4438]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcdde5b550 a2=94 a3=2 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.195000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.195000 audit: BPF prog-id=214 op=UNLOAD Jan 28 06:57:16.195000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcdde5b580 a2=0 a3=7ffcdde5b680 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.195000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.196000 audit: BPF prog-id=213 op=UNLOAD Jan 28 06:57:16.196000 audit[4438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a04dd10 a2=0 a3=6a3ce49bda0227b9 items=0 ppid=4216 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:57:16.210000 audit: BPF prog-id=200 op=UNLOAD Jan 28 06:57:16.210000 audit[4216]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0006d4bc0 a2=0 a3=0 items=0 ppid=4195 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.210000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 06:57:16.299000 audit[4475]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:16.299000 audit[4475]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc84ec5c30 a2=0 a3=7ffc84ec5c1c items=0 ppid=4216 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.299000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:16.306000 audit[4477]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:16.306000 audit[4477]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 
a1=7fffc11f3100 a2=0 a3=7fffc11f30ec items=0 ppid=4216 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.306000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:16.308000 audit[4476]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4476 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:16.308000 audit[4476]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcbb4a38e0 a2=0 a3=7ffcbb4a38cc items=0 ppid=4216 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.308000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:16.323000 audit[4481]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4481 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:16.323000 audit[4481]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd1a4c0110 a2=0 a3=7ffd1a4c00fc items=0 ppid=4216 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:16.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:16.368396 containerd[1638]: time="2026-01-28T06:57:16.368329785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:16.369687 containerd[1638]: time="2026-01-28T06:57:16.369643919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:57:16.369907 containerd[1638]: time="2026-01-28T06:57:16.369761365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:16.370217 kubelet[2936]: E0128 06:57:16.370134 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:16.371404 kubelet[2936]: E0128 06:57:16.370245 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:16.390691 kubelet[2936]: E0128 06:57:16.390468 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:577d02453e684fc79960ed8b50e8d722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:16.393152 containerd[1638]: time="2026-01-28T06:57:16.392832343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:57:16.699420 containerd[1638]: time="2026-01-28T06:57:16.699213975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:16.701008 containerd[1638]: time="2026-01-28T06:57:16.700934623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:57:16.701365 containerd[1638]: time="2026-01-28T06:57:16.700979061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:16.701527 kubelet[2936]: E0128 06:57:16.701323 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:16.701527 kubelet[2936]: E0128 06:57:16.701396 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:16.701712 kubelet[2936]: E0128 06:57:16.701626 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:16.703097 kubelet[2936]: E0128 06:57:16.703018 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:57:16.891341 systemd-networkd[1540]: vxlan.calico: Gained IPv6LL Jan 28 06:57:17.275403 systemd-networkd[1540]: cali9e4b7d97ec5: Gained IPv6LL Jan 28 06:57:17.399326 kubelet[2936]: E0128 06:57:17.399162 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:57:17.440000 audit[4491]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:17.450382 kernel: kauditd_printk_skb: 225 callbacks suppressed Jan 28 06:57:17.451302 kernel: audit: type=1325 audit(1769583437.440:659): table=filter:125 family=2 entries=20 op=nft_register_rule pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:17.440000 audit[4491]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc881207f0 a2=0 a3=7ffc881207dc items=0 ppid=3048 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:17.466241 kernel: audit: type=1300 audit(1769583437.440:659): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc881207f0 a2=0 a3=7ffc881207dc items=0 ppid=3048 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:17.466442 kernel: audit: type=1327 audit(1769583437.440:659): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:17.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:17.454000 audit[4491]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:17.470136 kernel: audit: type=1325 audit(1769583437.454:660): table=nat:126 family=2 entries=14 op=nft_register_rule pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:17.454000 audit[4491]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc881207f0 a2=0 a3=0 items=0 ppid=3048 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:17.474042 kernel: audit: type=1300 audit(1769583437.454:660): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc881207f0 a2=0 a3=0 items=0 ppid=3048 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:17.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:17.484506 kernel: audit: type=1327 audit(1769583437.454:660): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:20.980910 containerd[1638]: time="2026-01-28T06:57:20.980829750Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:21.162377 systemd-networkd[1540]: calie07f1b9c755: Link UP Jan 28 06:57:21.165246 systemd-networkd[1540]: calie07f1b9c755: Gained carrier Jan 28 06:57:21.195849 containerd[1638]: 2026-01-28 06:57:21.049 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0 goldmane-666569f655- calico-system b3b161cf-39ef-4b57-bdfb-9046b0dd729b 870 0 2026-01-28 06:56:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com goldmane-666569f655-9zxml eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie07f1b9c755 [] [] }} ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-" Jan 28 06:57:21.195849 containerd[1638]: 2026-01-28 06:57:21.050 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.195849 containerd[1638]: 2026-01-28 06:57:21.096 [INFO][4504] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" HandleID="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Workload="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.096 [INFO][4504] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" HandleID="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Workload="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"goldmane-666569f655-9zxml", "timestamp":"2026-01-28 06:57:21.096108226 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.096 [INFO][4504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.096 [INFO][4504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
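The audit PROCTITLE fields scattered through these entries (for bpftool, iptables-nft-restore, iptables-restore and runc) are the invoked command lines hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch (the helper name is ours), using the iptables-restore value from the audit record above as its example input:

    # Decode an audit PROCTITLE hex blob back into the original argv.
    # The kernel hex-encodes the process title and separates arguments with NUL bytes.
    def decode_proctitle(hex_blob: str) -> list:
        raw = bytes.fromhex(hex_blob)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # proctitle value copied from the iptables-restore audit record above
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']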
Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.096 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.108 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.116 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.122 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.126 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.196319 containerd[1638]: 2026-01-28 06:57:21.129 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.129 [INFO][4504] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.131 [INFO][4504] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.138 [INFO][4504] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.150 [INFO][4504] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.194/26] block=192.168.38.192/26 handle="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.151 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.194/26] handle="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.151 [INFO][4504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
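The IPAM trace above shows Calico confirming this host's affinity for the block 192.168.38.192/26 and then claiming the next free address, 192.168.38.194, for the goldmane pod. A simplified illustration of that "next unused IP in an affine block" step follows; it is not Calico's actual implementation, and it assumes 192.168.38.193 was already handed out earlier on this node:

    import ipaddress

    # Claim the next unused host address from a block this node has affinity for.
    def assign_from_block(block_cidr: str, already_assigned: set) -> str:
        block = ipaddress.ip_network(block_cidr)
        for ip in block.hosts():                 # skips the network/broadcast addresses
            if str(ip) not in already_assigned:
                already_assigned.add(str(ip))
                return str(ip)
        raise RuntimeError(f"block {block_cidr} is exhausted")

    used = {"192.168.38.193"}                    # assumed earlier allocation on this node
    print(assign_from_block("192.168.38.192/26", used))   # -> 192.168.38.194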
Jan 28 06:57:21.198827 containerd[1638]: 2026-01-28 06:57:21.151 [INFO][4504] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.194/26] IPv6=[] ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" HandleID="k8s-pod-network.00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Workload="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.200396 containerd[1638]: 2026-01-28 06:57:21.156 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b3b161cf-39ef-4b57-bdfb-9046b0dd729b", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-9zxml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie07f1b9c755", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:21.200530 containerd[1638]: 2026-01-28 06:57:21.156 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.194/32] ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.200530 containerd[1638]: 2026-01-28 06:57:21.156 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie07f1b9c755 ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.200530 containerd[1638]: 2026-01-28 06:57:21.164 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.200679 containerd[1638]: 2026-01-28 06:57:21.165 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" 
Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b3b161cf-39ef-4b57-bdfb-9046b0dd729b", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab", Pod:"goldmane-666569f655-9zxml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie07f1b9c755", MAC:"52:28:33:a0:4f:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:21.200798 containerd[1638]: 2026-01-28 06:57:21.191 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" Namespace="calico-system" Pod="goldmane-666569f655-9zxml" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-goldmane--666569f655--9zxml-eth0" Jan 28 06:57:21.238000 audit[4528]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:21.240524 containerd[1638]: time="2026-01-28T06:57:21.239940265Z" level=info msg="connecting to shim 00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab" address="unix:///run/containerd/s/9e38e96ca64f0b0a431762d33e71bc74493e0b182b877f8eb9d316e88d964a2a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:21.248008 kernel: audit: type=1325 audit(1769583441.238:661): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:21.238000 audit[4528]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffcf7005bf0 a2=0 a3=7ffcf7005bdc items=0 ppid=4216 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.259281 kernel: audit: type=1300 audit(1769583441.238:661): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffcf7005bf0 a2=0 a3=7ffcf7005bdc items=0 ppid=4216 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.238000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:21.270613 kernel: audit: type=1327 audit(1769583441.238:661): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:21.318390 systemd[1]: Started cri-containerd-00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab.scope - libcontainer container 00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab. Jan 28 06:57:21.347108 kernel: audit: type=1334 audit(1769583441.343:662): prog-id=215 op=LOAD Jan 28 06:57:21.343000 audit: BPF prog-id=215 op=LOAD Jan 28 06:57:21.347000 audit: BPF prog-id=216 op=LOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=216 op=UNLOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=217 op=LOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=218 op=LOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=218 op=UNLOAD Jan 28 
06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=217 op=UNLOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.347000 audit: BPF prog-id=219 op=LOAD Jan 28 06:57:21.347000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4533 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:21.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383931623639333936616234323931623639393538346638623636 Jan 28 06:57:21.421506 containerd[1638]: time="2026-01-28T06:57:21.421400423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9zxml,Uid:b3b161cf-39ef-4b57-bdfb-9046b0dd729b,Namespace:calico-system,Attempt:0,} returns sandbox id \"00891b69396ab4291b699584f8b66ff3350c2e815d26332b99e2f9bf2b7d59ab\"" Jan 28 06:57:21.426734 containerd[1638]: time="2026-01-28T06:57:21.426655356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:57:21.741394 containerd[1638]: time="2026-01-28T06:57:21.741260719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:21.743079 containerd[1638]: time="2026-01-28T06:57:21.742923675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:57:21.743079 containerd[1638]: time="2026-01-28T06:57:21.743034775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:21.743453 kubelet[2936]: E0128 06:57:21.743371 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:57:21.744143 kubelet[2936]: E0128 06:57:21.743471 
2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:57:21.744143 kubelet[2936]: E0128 06:57:21.743764 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr5hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:21.745166 kubelet[2936]: E0128 06:57:21.745036 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:21.978101 containerd[1638]: time="2026-01-28T06:57:21.978015315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:21.978828 containerd[1638]: time="2026-01-28T06:57:21.978494794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,}" Jan 28 06:57:21.979088 containerd[1638]: time="2026-01-28T06:57:21.979055587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:57:22.264193 systemd-networkd[1540]: calidd18950cf63: Link UP Jan 28 06:57:22.267771 systemd-networkd[1540]: calie07f1b9c755: Gained IPv6LL Jan 28 06:57:22.270683 systemd-networkd[1540]: calidd18950cf63: Gained carrier Jan 28 06:57:22.299412 containerd[1638]: 2026-01-28 06:57:22.081 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0 calico-apiserver-6bf65c897b- calico-apiserver 11f0b3ec-48a9-43d4-ba78-4be405b03a1e 868 0 2026-01-28 06:56:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf65c897b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com calico-apiserver-6bf65c897b-gv7nr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd18950cf63 [] [] }} ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-" Jan 28 06:57:22.299412 containerd[1638]: 2026-01-28 06:57:22.083 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.299412 containerd[1638]: 2026-01-28 06:57:22.181 [INFO][4618] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" HandleID="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.182 [INFO][4618] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" HandleID="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70d0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gf17r.gb1.brightbox.com", "pod":"calico-apiserver-6bf65c897b-gv7nr", "timestamp":"2026-01-28 06:57:22.181875551 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.182 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.182 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.182 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.209 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.218 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.225 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.228 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.300797 containerd[1638]: 2026-01-28 06:57:22.231 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.231 [INFO][4618] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.233 [INFO][4618] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.240 [INFO][4618] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.247 [INFO][4618] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.195/26] block=192.168.38.192/26 handle="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.247 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.195/26] handle="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.247 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
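Every PullImage failure in this section (whisker, whisker-backend, goldmane) is a plain 404 from ghcr.io, i.e. the v3.30.4 tags are not published under ghcr.io/flatcar/calico/*. The same result can be reproduced off the node by asking the registry for the manifest directly; the sketch below is a hedged example that assumes the Docker Registry v2 manifest endpoint and GHCR's anonymous token flow, so adjust if the registry behaves differently:

    import json
    import urllib.error
    import urllib.request

    repo, tag = "flatcar/calico/whisker", "v3.30.4"

    # Anonymous pull token (GHCR follows the usual token-auth flow for public images).
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        print(f"{repo}:{tag} exists")
    except urllib.error.HTTPError as err:
        print(f"registry answered {err.code} for {repo}:{tag}")   # 404, matching the log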
Jan 28 06:57:22.301278 containerd[1638]: 2026-01-28 06:57:22.248 [INFO][4618] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.195/26] IPv6=[] ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" HandleID="k8s-pod-network.a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.301600 containerd[1638]: 2026-01-28 06:57:22.253 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0", GenerateName:"calico-apiserver-6bf65c897b-", Namespace:"calico-apiserver", SelfLink:"", UID:"11f0b3ec-48a9-43d4-ba78-4be405b03a1e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf65c897b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6bf65c897b-gv7nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd18950cf63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.301712 containerd[1638]: 2026-01-28 06:57:22.253 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.195/32] ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.301712 containerd[1638]: 2026-01-28 06:57:22.253 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd18950cf63 ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.301712 containerd[1638]: 2026-01-28 06:57:22.272 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.301847 containerd[1638]: 2026-01-28 
06:57:22.273 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0", GenerateName:"calico-apiserver-6bf65c897b-", Namespace:"calico-apiserver", SelfLink:"", UID:"11f0b3ec-48a9-43d4-ba78-4be405b03a1e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf65c897b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a", Pod:"calico-apiserver-6bf65c897b-gv7nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd18950cf63", MAC:"22:48:28:48:96:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.301937 containerd[1638]: 2026-01-28 06:57:22.289 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-gv7nr" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--gv7nr-eth0" Jan 28 06:57:22.375984 containerd[1638]: time="2026-01-28T06:57:22.375559935Z" level=info msg="connecting to shim a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a" address="unix:///run/containerd/s/4679cfad7ffb7cdd58a3407b66e4683afea567ea23c85196ffe85304c9822ac1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:22.394000 audit[4660]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4660 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:22.394000 audit[4660]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7fff39e2ca80 a2=0 a3=7fff39e2ca6c items=0 ppid=4216 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.394000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:22.443778 kubelet[2936]: E0128 06:57:22.443719 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:22.448328 systemd-networkd[1540]: cali922cab05d62: Link UP Jan 28 06:57:22.455295 systemd-networkd[1540]: cali922cab05d62: Gained carrier Jan 28 06:57:22.504397 containerd[1638]: 2026-01-28 06:57:22.107 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0 calico-kube-controllers-74fbf496f- calico-system c1237d78-1650-42b9-ac4f-842b943ada74 871 0 2026-01-28 06:56:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74fbf496f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com calico-kube-controllers-74fbf496f-6pckk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali922cab05d62 [] [] }} ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-" Jan 28 06:57:22.504397 containerd[1638]: 2026-01-28 06:57:22.107 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.504397 containerd[1638]: 2026-01-28 06:57:22.204 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" HandleID="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.204 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" HandleID="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"calico-kube-controllers-74fbf496f-6pckk", "timestamp":"2026-01-28 06:57:22.204428362 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.204 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.248 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.248 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.310 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.323 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.330 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.333 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.506714 containerd[1638]: 2026-01-28 06:57:22.339 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.340 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.349 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7 Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.368 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.400 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.196/26] block=192.168.38.192/26 handle="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.400 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.196/26] handle="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.401 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:57:22.507722 containerd[1638]: 2026-01-28 06:57:22.401 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.196/26] IPv6=[] ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" HandleID="k8s-pod-network.39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.507520 systemd[1]: Started cri-containerd-a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a.scope - libcontainer container a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a. 
Jan 28 06:57:22.510443 containerd[1638]: 2026-01-28 06:57:22.418 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0", GenerateName:"calico-kube-controllers-74fbf496f-", Namespace:"calico-system", SelfLink:"", UID:"c1237d78-1650-42b9-ac4f-842b943ada74", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74fbf496f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-74fbf496f-6pckk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali922cab05d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.510568 containerd[1638]: 2026-01-28 06:57:22.418 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.196/32] ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.510568 containerd[1638]: 2026-01-28 06:57:22.418 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali922cab05d62 ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.510568 containerd[1638]: 2026-01-28 06:57:22.459 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.510715 containerd[1638]: 2026-01-28 06:57:22.460 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0", GenerateName:"calico-kube-controllers-74fbf496f-", Namespace:"calico-system", SelfLink:"", UID:"c1237d78-1650-42b9-ac4f-842b943ada74", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74fbf496f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7", Pod:"calico-kube-controllers-74fbf496f-6pckk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali922cab05d62", MAC:"86:26:e8:a3:2a:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.510822 containerd[1638]: 2026-01-28 06:57:22.492 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" Namespace="calico-system" Pod="calico-kube-controllers-74fbf496f-6pckk" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--kube--controllers--74fbf496f--6pckk-eth0" Jan 28 06:57:22.579016 systemd-networkd[1540]: cali4ed3ec48b79: Link UP Jan 28 06:57:22.592370 containerd[1638]: time="2026-01-28T06:57:22.591664564Z" level=info msg="connecting to shim 39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7" address="unix:///run/containerd/s/123e7e54f1a31b262a4a4b2291e77e5856df82ba24c05c9f2f2721062690a930" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:22.592256 systemd-networkd[1540]: cali4ed3ec48b79: Gained carrier Jan 28 06:57:22.626082 containerd[1638]: 2026-01-28 06:57:22.101 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0 coredns-674b8bbfcf- kube-system f8fa255f-5f12-4516-9e66-e8ceb04f1aa0 873 0 2026-01-28 06:56:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com coredns-674b8bbfcf-bg29g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ed3ec48b79 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-" Jan 28 06:57:22.626082 containerd[1638]: 2026-01-28 06:57:22.102 [INFO][4581] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.626082 containerd[1638]: 2026-01-28 06:57:22.208 [INFO][4626] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" HandleID="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.210 [INFO][4626] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" HandleID="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003015e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-bg29g", "timestamp":"2026-01-28 06:57:22.208484045 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.210 [INFO][4626] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.403 [INFO][4626] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.406 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.443 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.479 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.518 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.524 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627048 containerd[1638]: 2026-01-28 06:57:22.529 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.530 [INFO][4626] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.533 [INFO][4626] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0 Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.549 [INFO][4626] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.563 [INFO][4626] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.197/26] block=192.168.38.192/26 handle="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.563 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.197/26] handle="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.564 [INFO][4626] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 06:57:22.627425 containerd[1638]: 2026-01-28 06:57:22.564 [INFO][4626] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.197/26] IPv6=[] ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" HandleID="k8s-pod-network.e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.567 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f8fa255f-5f12-4516-9e66-e8ceb04f1aa0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-bg29g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ed3ec48b79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.567 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.197/32] ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.567 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ed3ec48b79 ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.596 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.598 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f8fa255f-5f12-4516-9e66-e8ceb04f1aa0", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0", Pod:"coredns-674b8bbfcf-bg29g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ed3ec48b79", MAC:"3a:90:0f:ea:81:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:22.628553 containerd[1638]: 2026-01-28 06:57:22.616 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bg29g" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bg29g-eth0" Jan 28 06:57:22.635697 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 28 06:57:22.635780 kernel: audit: type=1325 audit(1769583442.628:671): table=filter:129 family=2 entries=20 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:22.628000 audit[4713]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:22.649058 kernel: audit: type=1300 audit(1769583442.628:671): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffded997b40 a2=0 a3=7ffded997b2c items=0 ppid=3048 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.628000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffded997b40 a2=0 a3=7ffded997b2c items=0 ppid=3048 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:22.657582 kernel: audit: type=1327 audit(1769583442.628:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:22.657709 kernel: audit: type=1334 audit(1769583442.638:672): prog-id=220 op=LOAD Jan 28 06:57:22.638000 audit: BPF prog-id=220 op=LOAD Jan 28 06:57:22.646000 audit[4713]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:22.667714 kernel: audit: type=1325 audit(1769583442.646:673): table=nat:130 family=2 entries=14 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:22.667780 kernel: audit: type=1300 audit(1769583442.646:673): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffded997b40 a2=0 a3=0 items=0 ppid=3048 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.646000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffded997b40 a2=0 a3=0 items=0 ppid=3048 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.673128 kernel: audit: type=1327 audit(1769583442.646:673): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:22.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:22.650000 audit: BPF prog-id=221 op=LOAD Jan 28 06:57:22.687056 kernel: audit: type=1334 audit(1769583442.650:674): prog-id=221 op=LOAD Jan 28 06:57:22.650000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.693223 kernel: audit: type=1300 audit(1769583442.650:674): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.705743 kernel: audit: type=1327 audit(1769583442.650:674): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.650000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.650000 audit: BPF prog-id=221 op=UNLOAD Jan 28 06:57:22.650000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.658000 audit: BPF prog-id=222 op=LOAD Jan 28 06:57:22.658000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.659000 audit: BPF prog-id=223 op=LOAD Jan 28 06:57:22.659000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.661000 audit: BPF prog-id=223 op=UNLOAD Jan 28 06:57:22.661000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.661000 audit: BPF prog-id=222 op=UNLOAD Jan 28 06:57:22.661000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.661000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.661000 audit: BPF prog-id=224 op=LOAD Jan 28 06:57:22.661000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4655 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643532633366323335626463363166383834326662323066333264 Jan 28 06:57:22.741974 containerd[1638]: time="2026-01-28T06:57:22.741896802Z" level=info msg="connecting to shim e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0" address="unix:///run/containerd/s/7a0b3f1802c886708ea32cc61669866addae3dbba47c372dd70fa8d186b3b999" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:22.751250 systemd[1]: Started cri-containerd-39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7.scope - libcontainer container 39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7. Jan 28 06:57:22.792000 audit[4762]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:22.792000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffd907aa5e0 a2=0 a3=7ffd907aa5cc items=0 ppid=4216 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:22.826000 audit: BPF prog-id=225 op=LOAD Jan 28 06:57:22.832195 systemd[1]: Started cri-containerd-e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0.scope - libcontainer container e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0. 
Jan 28 06:57:22.831000 audit: BPF prog-id=226 op=LOAD Jan 28 06:57:22.831000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.831000 audit: BPF prog-id=226 op=UNLOAD Jan 28 06:57:22.831000 audit[4729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.833000 audit: BPF prog-id=227 op=LOAD Jan 28 06:57:22.833000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.835000 audit: BPF prog-id=228 op=LOAD Jan 28 06:57:22.835000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.835000 audit: BPF prog-id=228 op=UNLOAD Jan 28 06:57:22.835000 audit[4729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.836000 audit: BPF prog-id=227 op=UNLOAD Jan 28 06:57:22.836000 audit[4729]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.836000 audit: BPF prog-id=229 op=LOAD Jan 28 06:57:22.836000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4708 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339646661323738656334303236383930316463313232653061396631 Jan 28 06:57:22.851565 containerd[1638]: time="2026-01-28T06:57:22.851465867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-gv7nr,Uid:11f0b3ec-48a9-43d4-ba78-4be405b03a1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4d52c3f235bdc61f8842fb20f32d4a104ffea45e00bb0ec5125cd6fd0046c3a\"" Jan 28 06:57:22.857312 containerd[1638]: time="2026-01-28T06:57:22.857234032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:57:22.864000 audit: BPF prog-id=230 op=LOAD Jan 28 06:57:22.867000 audit: BPF prog-id=231 op=LOAD Jan 28 06:57:22.867000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.867000 audit: BPF prog-id=231 op=UNLOAD Jan 28 06:57:22.867000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.869000 audit: BPF prog-id=232 op=LOAD Jan 28 06:57:22.869000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.869000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.870000 audit: BPF prog-id=233 op=LOAD Jan 28 06:57:22.870000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.870000 audit: BPF prog-id=233 op=UNLOAD Jan 28 06:57:22.870000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.870000 audit: BPF prog-id=232 op=UNLOAD Jan 28 06:57:22.870000 audit[4764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.870000 audit: BPF prog-id=234 op=LOAD Jan 28 06:57:22.870000 audit[4764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4750 pid=4764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616565633330383432373839373436636463373530363331623563 Jan 28 06:57:22.890000 audit[4800]: NETFILTER_CFG table=filter:132 family=2 entries=50 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:22.890000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7ffcd4084240 a2=0 a3=7ffcd408422c items=0 ppid=4216 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:22.890000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:22.952695 containerd[1638]: time="2026-01-28T06:57:22.952604889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bg29g,Uid:f8fa255f-5f12-4516-9e66-e8ceb04f1aa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0\"" Jan 28 06:57:22.973690 containerd[1638]: time="2026-01-28T06:57:22.973366101Z" level=info msg="CreateContainer within sandbox \"e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:57:22.979774 containerd[1638]: time="2026-01-28T06:57:22.979742486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,}" Jan 28 06:57:23.048104 containerd[1638]: time="2026-01-28T06:57:23.046116433Z" level=info msg="Container d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:57:23.046214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592885493.mount: Deactivated successfully. Jan 28 06:57:23.057432 containerd[1638]: time="2026-01-28T06:57:23.056572952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74fbf496f-6pckk,Uid:c1237d78-1650-42b9-ac4f-842b943ada74,Namespace:calico-system,Attempt:0,} returns sandbox id \"39dfa278ec40268901dc122e0a9f11afc7f0daca5b680f74a91c2360ca18dba7\"" Jan 28 06:57:23.057563 containerd[1638]: time="2026-01-28T06:57:23.057190127Z" level=info msg="CreateContainer within sandbox \"e1aeec30842789746cdc750631b5c2a01f543f61ce5a3eedad8dd2c2c37e50f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba\"" Jan 28 06:57:23.058642 containerd[1638]: time="2026-01-28T06:57:23.058148177Z" level=info msg="StartContainer for \"d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba\"" Jan 28 06:57:23.063005 containerd[1638]: time="2026-01-28T06:57:23.062963588Z" level=info msg="connecting to shim d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba" address="unix:///run/containerd/s/7a0b3f1802c886708ea32cc61669866addae3dbba47c372dd70fa8d186b3b999" protocol=ttrpc version=3 Jan 28 06:57:23.104435 systemd[1]: Started cri-containerd-d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba.scope - libcontainer container d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba. 
Jan 28 06:57:23.141000 audit: BPF prog-id=235 op=LOAD Jan 28 06:57:23.142000 audit: BPF prog-id=236 op=LOAD Jan 28 06:57:23.142000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.142000 audit: BPF prog-id=236 op=UNLOAD Jan 28 06:57:23.142000 audit[4824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.144000 audit: BPF prog-id=237 op=LOAD Jan 28 06:57:23.144000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.144000 audit: BPF prog-id=238 op=LOAD Jan 28 06:57:23.144000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.144000 audit: BPF prog-id=238 op=UNLOAD Jan 28 06:57:23.144000 audit[4824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.144000 audit: BPF prog-id=237 op=UNLOAD Jan 28 06:57:23.144000 audit[4824]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.144000 audit: BPF prog-id=239 op=LOAD Jan 28 06:57:23.144000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4750 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433343330313835313861663633373336373532613165653563393232 Jan 28 06:57:23.194512 containerd[1638]: time="2026-01-28T06:57:23.193458205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:23.201021 containerd[1638]: time="2026-01-28T06:57:23.199763198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:57:23.202966 containerd[1638]: time="2026-01-28T06:57:23.199817572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:23.202966 containerd[1638]: time="2026-01-28T06:57:23.202646463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:57:23.203086 kubelet[2936]: E0128 06:57:23.201464 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:23.203086 kubelet[2936]: E0128 06:57:23.201537 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:23.203086 kubelet[2936]: E0128 06:57:23.201896 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:23.204241 kubelet[2936]: E0128 06:57:23.204018 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:57:23.216136 containerd[1638]: time="2026-01-28T06:57:23.216082391Z" level=info msg="StartContainer for \"d343018518af63736752a1ee5c922457d394ef08adcc237c0edadbceaed11dba\" returns successfully" Jan 28 06:57:23.337259 systemd-networkd[1540]: calied0f75eb99d: Link UP Jan 28 06:57:23.337614 systemd-networkd[1540]: calied0f75eb99d: Gained carrier Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.131 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0 coredns-674b8bbfcf- kube-system 48297015-9ea4-408f-b23a-bc18759d877d 861 0 2026-01-28 06:56:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] 
[] [] []} {k8s srv-gf17r.gb1.brightbox.com coredns-674b8bbfcf-l9g7w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied0f75eb99d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.131 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.224 [INFO][4847] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" HandleID="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.224 [INFO][4847] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" HandleID="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-l9g7w", "timestamp":"2026-01-28 06:57:23.224080421 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.224 [INFO][4847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.224 [INFO][4847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.224 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.246 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.264 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.276 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.279 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.283 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.284 [INFO][4847] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.286 [INFO][4847] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97 Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.306 [INFO][4847] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.319 [INFO][4847] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.198/26] block=192.168.38.192/26 handle="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.319 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.198/26] handle="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.319 [INFO][4847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 06:57:23.378076 containerd[1638]: 2026-01-28 06:57:23.319 [INFO][4847] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.198/26] IPv6=[] ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" HandleID="k8s-pod-network.65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Workload="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.324 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48297015-9ea4-408f-b23a-bc18759d877d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-l9g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied0f75eb99d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.324 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.198/32] ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.324 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0f75eb99d ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.341 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.342 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48297015-9ea4-408f-b23a-bc18759d877d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97", Pod:"coredns-674b8bbfcf-l9g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied0f75eb99d", MAC:"ba:c8:3a:ae:78:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:23.380638 containerd[1638]: 2026-01-28 06:57:23.371 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9g7w" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l9g7w-eth0" Jan 28 06:57:23.429197 containerd[1638]: time="2026-01-28T06:57:23.428904343Z" level=info msg="connecting to shim 65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97" address="unix:///run/containerd/s/97551920cc7e2baa8e073ddcf49ffc60dc8b7a0856360d28143ea3c82cc77803" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:23.460248 kubelet[2936]: E0128 06:57:23.460174 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:23.460524 kubelet[2936]: E0128 06:57:23.460331 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:57:23.509817 systemd[1]: Started cri-containerd-65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97.scope - libcontainer container 65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97. Jan 28 06:57:23.520855 containerd[1638]: time="2026-01-28T06:57:23.520720512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:23.522591 containerd[1638]: time="2026-01-28T06:57:23.522231379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:23.522874 containerd[1638]: time="2026-01-28T06:57:23.522765259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:57:23.524236 kubelet[2936]: E0128 06:57:23.524169 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:57:23.524623 kubelet[2936]: E0128 06:57:23.524412 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:57:23.529343 kubelet[2936]: E0128 06:57:23.528058 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6hr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:23.531865 kubelet[2936]: E0128 06:57:23.531795 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:57:23.554085 kubelet[2936]: I0128 06:57:23.550000 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bg29g" podStartSLOduration=62.549859564 
podStartE2EDuration="1m2.549859564s" podCreationTimestamp="2026-01-28 06:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:57:23.500256444 +0000 UTC m=+68.736913969" watchObservedRunningTime="2026-01-28 06:57:23.549859564 +0000 UTC m=+68.786517083" Jan 28 06:57:23.581000 audit: BPF prog-id=240 op=LOAD Jan 28 06:57:23.583000 audit: BPF prog-id=241 op=LOAD Jan 28 06:57:23.583000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.583000 audit: BPF prog-id=241 op=UNLOAD Jan 28 06:57:23.583000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.584000 audit: BPF prog-id=242 op=LOAD Jan 28 06:57:23.584000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.584000 audit: BPF prog-id=243 op=LOAD Jan 28 06:57:23.584000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.584000 audit: BPF prog-id=243 op=UNLOAD Jan 28 06:57:23.584000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.584000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.585000 audit: BPF prog-id=242 op=UNLOAD Jan 28 06:57:23.585000 audit[4888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.585000 audit: BPF prog-id=244 op=LOAD Jan 28 06:57:23.585000 audit[4888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4876 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635613734326537303634343239323435663336666638623566353538 Jan 28 06:57:23.648000 audit[4917]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:23.648000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe786fadc0 a2=0 a3=7ffe786fadac items=0 ppid=3048 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:23.653000 audit[4917]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:23.653000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe786fadc0 a2=0 a3=0 items=0 ppid=3048 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:23.720582 containerd[1638]: time="2026-01-28T06:57:23.720434995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9g7w,Uid:48297015-9ea4-408f-b23a-bc18759d877d,Namespace:kube-system,Attempt:0,} returns sandbox id \"65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97\"" Jan 28 06:57:23.733146 containerd[1638]: time="2026-01-28T06:57:23.732640032Z" level=info msg="CreateContainer within sandbox \"65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:57:23.737000 audit[4919]: NETFILTER_CFG table=filter:135 family=2 entries=50 op=nft_register_chain pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:23.737000 audit[4919]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffed08edd20 a2=0 a3=7ffed08edd0c items=0 ppid=4216 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.737000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:23.739121 systemd-networkd[1540]: cali922cab05d62: Gained IPv6LL Jan 28 06:57:23.758699 containerd[1638]: time="2026-01-28T06:57:23.758618550Z" level=info msg="Container dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:57:23.762000 audit[4928]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:23.762000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc25f7e410 a2=0 a3=7ffc25f7e3fc items=0 ppid=3048 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:23.770548 containerd[1638]: time="2026-01-28T06:57:23.770470696Z" level=info msg="CreateContainer within sandbox \"65a742e7064429245f36ff8b5f5586a0bc1ca5c14cc3bf1bc63962315cc87e97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227\"" Jan 28 06:57:23.772844 containerd[1638]: time="2026-01-28T06:57:23.772090236Z" level=info msg="StartContainer for \"dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227\"" Jan 28 06:57:23.771000 audit[4928]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:23.771000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc25f7e410 a2=0 a3=0 items=0 ppid=3048 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:23.775426 containerd[1638]: time="2026-01-28T06:57:23.775368380Z" level=info msg="connecting to shim dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227" address="unix:///run/containerd/s/97551920cc7e2baa8e073ddcf49ffc60dc8b7a0856360d28143ea3c82cc77803" protocol=ttrpc version=3 Jan 28 06:57:23.806370 systemd[1]: Started cri-containerd-dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227.scope - libcontainer container dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227. 
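The audit records above carry the invoking command line in their PROCTITLE field as hex-encoded argv with NUL separators. A minimal decoding sketch (Python, not part of any tool shown in this log); the sample value is the iptables-restore proctitle copied verbatim from the records above:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, arguments separated by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

# Sample copied from one of the iptables-restore audit records above.
sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
          "002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

The runc proctitles decode the same way, beginning "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/65a742e7..." (the rest is cut off by the audit proctitle length limit, so it is not reconstructed here).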
Jan 28 06:57:23.833000 audit: BPF prog-id=245 op=LOAD Jan 28 06:57:23.834000 audit: BPF prog-id=246 op=LOAD Jan 28 06:57:23.834000 audit[4929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.834000 audit: BPF prog-id=246 op=UNLOAD Jan 28 06:57:23.834000 audit[4929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.835000 audit: BPF prog-id=247 op=LOAD Jan 28 06:57:23.835000 audit[4929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.835000 audit: BPF prog-id=248 op=LOAD Jan 28 06:57:23.835000 audit[4929]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.835000 audit: BPF prog-id=248 op=UNLOAD Jan 28 06:57:23.835000 audit[4929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.836000 audit: BPF prog-id=247 op=UNLOAD Jan 28 06:57:23.836000 audit[4929]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.836000 audit: BPF prog-id=249 op=LOAD Jan 28 06:57:23.836000 audit[4929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4876 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:23.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616665313533313361626432316336663462626334396437643461 Jan 28 06:57:23.867360 containerd[1638]: time="2026-01-28T06:57:23.867272884Z" level=info msg="StartContainer for \"dfafe15313abd21c6f4bbc49d7d4af2790b274ca1eb6d2cd8072176c6f8c6227\" returns successfully" Jan 28 06:57:24.187254 systemd-networkd[1540]: calidd18950cf63: Gained IPv6LL Jan 28 06:57:24.443347 systemd-networkd[1540]: cali4ed3ec48b79: Gained IPv6LL Jan 28 06:57:24.468057 kubelet[2936]: E0128 06:57:24.467619 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:57:24.469394 kubelet[2936]: E0128 06:57:24.469201 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:57:24.504589 kubelet[2936]: I0128 06:57:24.504237 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l9g7w" podStartSLOduration=63.504213988 podStartE2EDuration="1m3.504213988s" podCreationTimestamp="2026-01-28 06:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:57:24.488503748 +0000 UTC m=+69.725161288" watchObservedRunningTime="2026-01-28 06:57:24.504213988 +0000 UTC m=+69.740871508" Jan 28 06:57:24.591000 audit[4964]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4964 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 06:57:24.591000 audit[4964]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5007c330 a2=0 a3=7fff5007c31c items=0 ppid=3048 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:24.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:24.597000 audit[4964]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:24.597000 audit[4964]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff5007c330 a2=0 a3=7fff5007c31c items=0 ppid=3048 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:24.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:24.764232 systemd-networkd[1540]: calied0f75eb99d: Gained IPv6LL Jan 28 06:57:24.976890 containerd[1638]: time="2026-01-28T06:57:24.976822024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,}" Jan 28 06:57:25.173478 systemd-networkd[1540]: cali8bef59f095c: Link UP Jan 28 06:57:25.176456 systemd-networkd[1540]: cali8bef59f095c: Gained carrier Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.045 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0 csi-node-driver- calico-system feae4718-ebbe-416f-b2aa-04c3e4a5379c 757 0 2026-01-28 06:56:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com csi-node-driver-4hjw4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8bef59f095c [] [] }} ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.046 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.096 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" HandleID="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Workload="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.096 [INFO][4977] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" HandleID="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Workload="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gf17r.gb1.brightbox.com", "pod":"csi-node-driver-4hjw4", "timestamp":"2026-01-28 06:57:25.096062431 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.096 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.096 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.096 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.108 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.115 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.122 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.128 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.134 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.134 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.138 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7 Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.145 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.159 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.199/26] block=192.168.38.192/26 handle="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.159 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.199/26] handle="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:25.211415 
containerd[1638]: 2026-01-28 06:57:25.159 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:57:25.211415 containerd[1638]: 2026-01-28 06:57:25.159 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.199/26] IPv6=[] ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" HandleID="k8s-pod-network.cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Workload="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.164 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"feae4718-ebbe-416f-b2aa-04c3e4a5379c", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-4hjw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bef59f095c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.164 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.199/32] ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.164 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bef59f095c ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.177 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.178 [INFO][4967] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"feae4718-ebbe-416f-b2aa-04c3e4a5379c", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7", Pod:"csi-node-driver-4hjw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bef59f095c", MAC:"da:4e:c7:ae:6c:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:25.213809 containerd[1638]: 2026-01-28 06:57:25.200 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" Namespace="calico-system" Pod="csi-node-driver-4hjw4" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-csi--node--driver--4hjw4-eth0" Jan 28 06:57:25.243000 audit[4990]: NETFILTER_CFG table=filter:140 family=2 entries=48 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:25.243000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7ffd4718ed20 a2=0 a3=7ffd4718ed0c items=0 ppid=4216 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.243000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:25.286325 containerd[1638]: time="2026-01-28T06:57:25.285370892Z" level=info msg="connecting to shim cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7" address="unix:///run/containerd/s/84359b6dc4cebb93da2ca177a80cd771db86ecd8804e815a6fbcb4f840489d48" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:25.356395 systemd[1]: Started cri-containerd-cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7.scope - libcontainer container cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7. 
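The pod_startup_latency_tracker record above reports a podStartSLOduration of about 63.5 s for coredns-674b8bbfcf-l9g7w; the number lines up with the difference between the podCreationTimestamp and the watchObservedRunningTime in that same record. A minimal arithmetic check (Python; timestamps copied from the record, fractional seconds truncated to microseconds):

```python
# Recompute the ~63.5 s podStartSLOduration reported above for coredns-674b8bbfcf-l9g7w
# from the two timestamps logged in the same record.
from datetime import datetime, timezone

created  = datetime(2026, 1, 28, 6, 56, 21, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2026, 1, 28, 6, 57, 24, 504214, tzinfo=timezone.utc)  # watchObservedRunningTime, to the microsecond

print((observed - created).total_seconds())
# -> 63.504214, matching podStartSLOduration=63.504213988 up to the truncated precision
```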
Jan 28 06:57:25.375000 audit: BPF prog-id=250 op=LOAD Jan 28 06:57:25.375000 audit: BPF prog-id=251 op=LOAD Jan 28 06:57:25.375000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.376000 audit: BPF prog-id=251 op=UNLOAD Jan 28 06:57:25.376000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.376000 audit: BPF prog-id=252 op=LOAD Jan 28 06:57:25.376000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.376000 audit: BPF prog-id=253 op=LOAD Jan 28 06:57:25.376000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.376000 audit: BPF prog-id=253 op=UNLOAD Jan 28 06:57:25.376000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.377000 audit: BPF prog-id=252 op=UNLOAD Jan 28 06:57:25.377000 audit[5011]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.377000 audit: BPF prog-id=254 op=LOAD Jan 28 06:57:25.377000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4999 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666132653137613035653838353264383937373466353635663438 Jan 28 06:57:25.406672 containerd[1638]: time="2026-01-28T06:57:25.406609104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hjw4,Uid:feae4718-ebbe-416f-b2aa-04c3e4a5379c,Namespace:calico-system,Attempt:0,} returns sandbox id \"cffa2e17a05e8852d89774f565f482e2c970e07ad91c1a1715a165d227d905a7\"" Jan 28 06:57:25.410536 containerd[1638]: time="2026-01-28T06:57:25.410320332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:57:25.654000 audit[5040]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:25.654000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe265541d0 a2=0 a3=7ffe265541bc items=0 ppid=3048 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:25.697000 audit[5040]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:25.697000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe265541d0 a2=0 a3=7ffe265541bc items=0 ppid=3048 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:25.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:25.711258 containerd[1638]: time="2026-01-28T06:57:25.711197903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:25.712972 containerd[1638]: time="2026-01-28T06:57:25.712906438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:57:25.713441 containerd[1638]: time="2026-01-28T06:57:25.713093988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:25.713683 kubelet[2936]: E0128 06:57:25.713616 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:57:25.714239 kubelet[2936]: E0128 06:57:25.713689 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:57:25.715084 kubelet[2936]: E0128 06:57:25.715007 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:25.718582 containerd[1638]: time="2026-01-28T06:57:25.718280308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:57:25.977913 containerd[1638]: time="2026-01-28T06:57:25.977639542Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:57:26.028169 containerd[1638]: time="2026-01-28T06:57:26.027851077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:26.030538 containerd[1638]: time="2026-01-28T06:57:26.030164807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:57:26.030810 containerd[1638]: time="2026-01-28T06:57:26.030224507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:26.031479 kubelet[2936]: E0128 06:57:26.031357 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:57:26.031479 kubelet[2936]: E0128 06:57:26.031437 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:57:26.031988 kubelet[2936]: E0128 06:57:26.031631 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:26.033756 kubelet[2936]: E0128 06:57:26.033686 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:57:26.239602 systemd-networkd[1540]: calib0193df7354: Link UP Jan 28 06:57:26.242240 systemd-networkd[1540]: calib0193df7354: Gained carrier Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.100 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0 calico-apiserver-6bf65c897b- calico-apiserver 1023d10f-af49-4cbc-b6ed-d31a2d3bba42 866 0 2026-01-28 06:56:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf65c897b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gf17r.gb1.brightbox.com calico-apiserver-6bf65c897b-bbmbt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib0193df7354 [] [] }} ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.102 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.167 [INFO][5056] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" HandleID="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.167 [INFO][5056] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" HandleID="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gf17r.gb1.brightbox.com", "pod":"calico-apiserver-6bf65c897b-bbmbt", "timestamp":"2026-01-28 06:57:26.16729255 +0000 UTC"}, Hostname:"srv-gf17r.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.167 [INFO][5056] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.167 [INFO][5056] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
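The repeating ImagePullBackOff / "Back-off pulling image" messages above come from kubelet retrying the failed pulls on a capped exponential back-off; the default schedule is commonly described as roughly a 10 s initial delay doubling up to a 5-minute cap, which is assumed here. A minimal sketch of that retry schedule, not kubelet's actual implementation:

```python
# Capped exponential back-off of the kind behind the repeated "Back-off pulling image"
# messages above (assumed defaults: 10 s initial delay, 300 s cap).
def backoff_delays(initial: float = 10.0, cap: float = 300.0):
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

gen = backoff_delays()
print([next(gen) for _ in range(6)])   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```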
Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.167 [INFO][5056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gf17r.gb1.brightbox.com' Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.185 [INFO][5056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.195 [INFO][5056] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.201 [INFO][5056] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.204 [INFO][5056] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.207 [INFO][5056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.207 [INFO][5056] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.210 [INFO][5056] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.215 [INFO][5056] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.225 [INFO][5056] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.200/26] block=192.168.38.192/26 handle="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.226 [INFO][5056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.200/26] handle="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" host="srv-gf17r.gb1.brightbox.com" Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.226 [INFO][5056] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
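Each IPAM run above claims its per-pod /32 out of the same host-affine block 192.168.38.192/26 (so far .198, .199 and now .200). A quick containment check with Python's ipaddress module, addresses copied from the records above:

```python
# Verify that the per-pod addresses assigned above all fall inside the host-affine
# block 192.168.38.192/26 that Calico IPAM keeps claiming for srv-gf17r.
import ipaddress

block = ipaddress.ip_network("192.168.38.192/26")            # covers .192-.255, 64 addresses
for addr in ("192.168.38.198", "192.168.38.199", "192.168.38.200"):
    print(addr, ipaddress.ip_address(addr) in block)          # True for all three
```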
Jan 28 06:57:26.273968 containerd[1638]: 2026-01-28 06:57:26.226 [INFO][5056] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.200/26] IPv6=[] ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" HandleID="k8s-pod-network.c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Workload="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 06:57:26.230 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0", GenerateName:"calico-apiserver-6bf65c897b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1023d10f-af49-4cbc-b6ed-d31a2d3bba42", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf65c897b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6bf65c897b-bbmbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib0193df7354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 06:57:26.230 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.200/32] ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 06:57:26.230 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0193df7354 ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 06:57:26.244 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 
06:57:26.246 [INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0", GenerateName:"calico-apiserver-6bf65c897b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1023d10f-af49-4cbc-b6ed-d31a2d3bba42", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf65c897b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gf17r.gb1.brightbox.com", ContainerID:"c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad", Pod:"calico-apiserver-6bf65c897b-bbmbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib0193df7354", MAC:"c2:80:87:7d:a2:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:57:26.276066 containerd[1638]: 2026-01-28 06:57:26.268 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" Namespace="calico-apiserver" Pod="calico-apiserver-6bf65c897b-bbmbt" WorkloadEndpoint="srv--gf17r.gb1.brightbox.com-k8s-calico--apiserver--6bf65c897b--bbmbt-eth0" Jan 28 06:57:26.319099 containerd[1638]: time="2026-01-28T06:57:26.318934739Z" level=info msg="connecting to shim c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad" address="unix:///run/containerd/s/c1341b2ac129ff4a12145d6e18490e6a6b473e9c97db5879a689775bcfac7bd7" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:57:26.340000 audit[5078]: NETFILTER_CFG table=filter:143 family=2 entries=53 op=nft_register_chain pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:57:26.340000 audit[5078]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffceb246680 a2=0 a3=7ffceb24666c items=0 ppid=4216 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.340000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:57:26.380257 systemd[1]: Started cri-containerd-c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad.scope - libcontainer container 
c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad. Jan 28 06:57:26.415000 audit: BPF prog-id=255 op=LOAD Jan 28 06:57:26.417000 audit: BPF prog-id=256 op=LOAD Jan 28 06:57:26.417000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.417000 audit: BPF prog-id=256 op=UNLOAD Jan 28 06:57:26.417000 audit[5090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.418000 audit: BPF prog-id=257 op=LOAD Jan 28 06:57:26.418000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.418000 audit: BPF prog-id=258 op=LOAD Jan 28 06:57:26.418000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.418000 audit: BPF prog-id=258 op=UNLOAD Jan 28 06:57:26.418000 audit[5090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.418000 audit: BPF 
prog-id=257 op=UNLOAD Jan 28 06:57:26.418000 audit[5090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.418000 audit: BPF prog-id=259 op=LOAD Jan 28 06:57:26.418000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5079 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333306331336662386133356334643130653563356335353561396439 Jan 28 06:57:26.485969 kubelet[2936]: E0128 06:57:26.485680 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:57:26.493422 containerd[1638]: time="2026-01-28T06:57:26.493344405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf65c897b-bbmbt,Uid:1023d10f-af49-4cbc-b6ed-d31a2d3bba42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c30c13fb8a35c4d10e5c5c555a9d9a5caca17943a455af8f715e7d47757b8bad\"" Jan 28 06:57:26.507526 containerd[1638]: time="2026-01-28T06:57:26.507451181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:57:26.817717 containerd[1638]: time="2026-01-28T06:57:26.817533966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:26.819928 containerd[1638]: time="2026-01-28T06:57:26.819703693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:57:26.819928 containerd[1638]: time="2026-01-28T06:57:26.819730367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:26.820398 kubelet[2936]: E0128 06:57:26.820219 2936 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:26.820398 kubelet[2936]: E0128 06:57:26.820322 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:26.821634 kubelet[2936]: E0128 06:57:26.820651 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:26.822358 kubelet[2936]: E0128 06:57:26.821998 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" 
podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:27.003230 systemd-networkd[1540]: cali8bef59f095c: Gained IPv6LL Jan 28 06:57:27.489039 kubelet[2936]: E0128 06:57:27.488036 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:27.527000 audit[5122]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:27.527000 audit[5122]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe525c27b0 a2=0 a3=7ffe525c279c items=0 ppid=3048 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:27.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:27.532000 audit[5122]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:57:27.532000 audit[5122]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe525c27b0 a2=0 a3=7ffe525c279c items=0 ppid=3048 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:27.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:57:27.579606 systemd-networkd[1540]: calib0193df7354: Gained IPv6LL Jan 28 06:57:28.498912 kubelet[2936]: E0128 06:57:28.498783 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:31.979119 containerd[1638]: time="2026-01-28T06:57:31.978911971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:57:32.297940 containerd[1638]: time="2026-01-28T06:57:32.297678314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:32.299564 containerd[1638]: time="2026-01-28T06:57:32.299454221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:57:32.299564 containerd[1638]: time="2026-01-28T06:57:32.299528972Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:32.300299 kubelet[2936]: E0128 06:57:32.300106 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:32.300299 kubelet[2936]: E0128 06:57:32.300183 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:32.301763 kubelet[2936]: E0128 06:57:32.301703 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:577d02453e684fc79960ed8b50e8d722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:32.304257 containerd[1638]: time="2026-01-28T06:57:32.304192952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:57:32.610266 containerd[1638]: time="2026-01-28T06:57:32.610052470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:32.611376 containerd[1638]: time="2026-01-28T06:57:32.611290267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:57:32.611376 containerd[1638]: time="2026-01-28T06:57:32.611339682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" 
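Every pull in this stretch of the log fails identically: ghcr.io answers 404 Not Found for the v3.30.4 tag of the flatcar/calico images, and the kubelet surfaces it as ErrImagePull. Below is a hedged Go sketch for reproducing that check outside the kubelet; it assumes ghcr.io follows the standard OCI distribution anonymous token flow for public images (the endpoint shapes are an assumption, not something taken from this log), while the repository and tag come from the failing pull itself:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

const (
	repo = "flatcar/calico/whisker" // repository from the failing pull above
	tag  = "v3.30.4"                // tag the kubelet tried to pull
)

func main() {
	// 1. Fetch an anonymous pull token (assumed endpoint shape).
	var tok struct {
		Token string `json:"token"`
	}
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest for the tag; a 404 here matches the
	//    "fetch failed after status: 404 Not Found" lines from containerd.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}

A 404 from the manifest endpoint would be consistent with the tag simply not being published under that name, rather than with an authentication or network problem on the node.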
Jan 28 06:57:32.611751 kubelet[2936]: E0128 06:57:32.611680 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:32.611840 kubelet[2936]: E0128 06:57:32.611757 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:32.612067 kubelet[2936]: E0128 06:57:32.612006 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:32.614024 kubelet[2936]: E0128 06:57:32.613972 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:57:35.977817 containerd[1638]: time="2026-01-28T06:57:35.977702533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:57:36.305973 containerd[1638]: time="2026-01-28T06:57:36.305859531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:36.307133 containerd[1638]: time="2026-01-28T06:57:36.307085268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:57:36.307259 containerd[1638]: time="2026-01-28T06:57:36.307202923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:36.308218 kubelet[2936]: E0128 06:57:36.307508 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:57:36.308218 kubelet[2936]: E0128 06:57:36.307598 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:57:36.308218 kubelet[2936]: E0128 06:57:36.307819 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr5hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:36.309123 kubelet[2936]: E0128 06:57:36.309041 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:37.979993 containerd[1638]: time="2026-01-28T06:57:37.979548085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:57:38.288874 containerd[1638]: time="2026-01-28T06:57:38.288777502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:38.290230 containerd[1638]: time="2026-01-28T06:57:38.290175050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:57:38.290339 containerd[1638]: time="2026-01-28T06:57:38.290313490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:38.290992 kubelet[2936]: E0128 06:57:38.290744 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:57:38.290992 kubelet[2936]: E0128 06:57:38.290830 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:57:38.291857 
kubelet[2936]: E0128 06:57:38.291128 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6hr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:38.293103 kubelet[2936]: E0128 06:57:38.292986 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:57:38.981629 containerd[1638]: time="2026-01-28T06:57:38.980856056Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:57:39.310909 containerd[1638]: time="2026-01-28T06:57:39.310405292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:39.312036 containerd[1638]: time="2026-01-28T06:57:39.311985597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:57:39.312295 containerd[1638]: time="2026-01-28T06:57:39.312259552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:39.313008 kubelet[2936]: E0128 06:57:39.312760 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:57:39.313008 kubelet[2936]: E0128 06:57:39.312843 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:57:39.314529 kubelet[2936]: E0128 06:57:39.313910 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:39.315070 containerd[1638]: time="2026-01-28T06:57:39.315010193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:57:39.640574 containerd[1638]: time="2026-01-28T06:57:39.639900279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:39.641477 containerd[1638]: time="2026-01-28T06:57:39.641422105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:57:39.641566 containerd[1638]: time="2026-01-28T06:57:39.641546250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:39.643004 kubelet[2936]: E0128 06:57:39.642583 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:39.643004 kubelet[2936]: E0128 06:57:39.642662 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:39.643583 containerd[1638]: time="2026-01-28T06:57:39.643433601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:57:39.644522 kubelet[2936]: E0128 06:57:39.644172 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:39.647616 kubelet[2936]: E0128 06:57:39.647061 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:57:39.962666 containerd[1638]: time="2026-01-28T06:57:39.962393266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:39.964046 containerd[1638]: time="2026-01-28T06:57:39.963935503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:57:39.964202 containerd[1638]: time="2026-01-28T06:57:39.963983477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:39.964805 kubelet[2936]: E0128 06:57:39.964363 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:57:39.964805 kubelet[2936]: E0128 06:57:39.964438 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:57:39.964805 kubelet[2936]: E0128 06:57:39.964661 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:39.966621 kubelet[2936]: E0128 06:57:39.966562 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:57:39.977158 containerd[1638]: time="2026-01-28T06:57:39.977097658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:57:40.289259 containerd[1638]: time="2026-01-28T06:57:40.289101027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:40.291273 containerd[1638]: time="2026-01-28T06:57:40.290977365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:40.291927 containerd[1638]: time="2026-01-28T06:57:40.291491904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:57:40.292077 kubelet[2936]: E0128 06:57:40.291878 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:40.292077 kubelet[2936]: E0128 06:57:40.291981 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:57:40.294606 kubelet[2936]: E0128 06:57:40.292199 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:40.296823 kubelet[2936]: E0128 06:57:40.296725 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:44.982450 kubelet[2936]: E0128 06:57:44.982182 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:57:46.979601 kubelet[2936]: E0128 06:57:46.979361 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:57:51.983204 kubelet[2936]: E0128 06:57:51.982561 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:57:51.989608 kubelet[2936]: E0128 06:57:51.989410 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:57:52.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.31.94:22-20.161.92.111:38594 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:57:52.147615 kernel: kauditd_printk_skb: 217 callbacks suppressed Jan 28 06:57:52.147831 kernel: audit: type=1130 audit(1769583472.112:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.31.94:22-20.161.92.111:38594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:57:52.113689 systemd[1]: Started sshd@7-10.230.31.94:22-20.161.92.111:38594.service - OpenSSH per-connection server daemon (20.161.92.111:38594). Jan 28 06:57:52.728000 audit[5167]: USER_ACCT pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.743539 kernel: audit: type=1101 audit(1769583472.728:753): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.746418 sshd[5167]: Accepted publickey for core from 20.161.92.111 port 38594 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:57:52.757415 kernel: audit: type=1103 audit(1769583472.748:754): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.748000 audit[5167]: CRED_ACQ pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.754167 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:57:52.772214 kernel: audit: type=1006 audit(1769583472.751:755): pid=5167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 06:57:52.772314 kernel: audit: type=1300 audit(1769583472.751:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe65fdce20 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:52.751000 audit[5167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe65fdce20 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:52.779278 systemd-logind[1613]: New session 11 of user core. Jan 28 06:57:52.751000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:57:52.784975 kernel: audit: type=1327 audit(1769583472.751:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:57:52.785337 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 06:57:52.793000 audit[5167]: USER_START pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.801035 kernel: audit: type=1105 audit(1769583472.793:756): pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.800000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.807120 kernel: audit: type=1103 audit(1769583472.800:757): pid=5175 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:52.984361 kubelet[2936]: E0128 06:57:52.983033 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:57:52.984361 kubelet[2936]: E0128 06:57:52.984366 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:57:53.687031 sshd[5175]: Connection closed by 20.161.92.111 port 38594 Jan 28 06:57:53.689246 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Jan 28 06:57:53.697000 audit[5167]: USER_END pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:53.719247 kernel: audit: type=1106 audit(1769583473.697:758): pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:53.719396 kernel: audit: type=1104 
audit(1769583473.697:759): pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:53.697000 audit[5167]: CRED_DISP pid=5167 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:53.731336 systemd[1]: sshd@7-10.230.31.94:22-20.161.92.111:38594.service: Deactivated successfully. Jan 28 06:57:53.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.31.94:22-20.161.92.111:38594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:57:53.735605 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 06:57:53.738376 systemd-logind[1613]: Session 11 logged out. Waiting for processes to exit. Jan 28 06:57:53.741196 systemd-logind[1613]: Removed session 11. Jan 28 06:57:57.982008 containerd[1638]: time="2026-01-28T06:57:57.980889438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:57:58.306986 containerd[1638]: time="2026-01-28T06:57:58.306566543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:58.308596 containerd[1638]: time="2026-01-28T06:57:58.308407850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:57:58.308596 containerd[1638]: time="2026-01-28T06:57:58.308541742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:58.309595 kubelet[2936]: E0128 06:57:58.309032 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:58.309595 kubelet[2936]: E0128 06:57:58.309142 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:57:58.309595 kubelet[2936]: E0128 06:57:58.309425 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:577d02453e684fc79960ed8b50e8d722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:58.312179 containerd[1638]: time="2026-01-28T06:57:58.312133850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:57:58.615905 containerd[1638]: time="2026-01-28T06:57:58.615502251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:57:58.618090 containerd[1638]: time="2026-01-28T06:57:58.616914509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:57:58.618090 containerd[1638]: time="2026-01-28T06:57:58.617001158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:57:58.618544 kubelet[2936]: E0128 06:57:58.618470 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:58.618668 kubelet[2936]: E0128 06:57:58.618554 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:57:58.618816 kubelet[2936]: E0128 06:57:58.618760 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:57:58.620248 kubelet[2936]: E0128 06:57:58.620144 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:57:58.794010 systemd[1]: Started sshd@8-10.230.31.94:22-20.161.92.111:51876.service - OpenSSH per-connection server daemon (20.161.92.111:51876). Jan 28 06:57:58.811772 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:57:58.811998 kernel: audit: type=1130 audit(1769583478.794:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.31.94:22-20.161.92.111:51876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:57:58.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.31.94:22-20.161.92.111:51876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:57:59.349000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.357290 sshd[5199]: Accepted publickey for core from 20.161.92.111 port 51876 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:57:59.360211 kernel: audit: type=1101 audit(1769583479.349:762): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.363000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.368639 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:57:59.373016 kernel: audit: type=1103 audit(1769583479.363:763): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.380016 kernel: audit: type=1006 audit(1769583479.363:764): pid=5199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 06:57:59.363000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78f051a0 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:59.384546 systemd-logind[1613]: New session 12 of user core. Jan 28 06:57:59.387236 kernel: audit: type=1300 audit(1769583479.363:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78f051a0 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:57:59.387321 kernel: audit: type=1327 audit(1769583479.363:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:57:59.363000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:57:59.393761 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 28 06:57:59.400000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.409027 kernel: audit: type=1105 audit(1769583479.400:765): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.410174 kernel: audit: type=1103 audit(1769583479.408:766): pid=5203 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.408000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.815648 sshd[5203]: Connection closed by 20.161.92.111 port 51876 Jan 28 06:57:59.816535 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 28 06:57:59.819000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.832671 kernel: audit: type=1106 audit(1769583479.819:767): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.834897 systemd[1]: sshd@8-10.230.31.94:22-20.161.92.111:51876.service: Deactivated successfully. Jan 28 06:57:59.819000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.842003 kernel: audit: type=1104 audit(1769583479.819:768): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:57:59.844773 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 06:57:59.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.31.94:22-20.161.92.111:51876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:57:59.849772 systemd-logind[1613]: Session 12 logged out. Waiting for processes to exit. Jan 28 06:57:59.851630 systemd-logind[1613]: Removed session 12. 
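Every pull failure in this log follows the same containerd sequence: PullImage is requested, ghcr.io answers 404 Not Found for the manifest, containerd reports "failed to resolve image ... not found", and kubelet surfaces it first as ErrImagePull and then as ImagePullBackOff. In other words, the v3.30.4 tag simply does not resolve under ghcr.io/flatcar/calico/*. The Go sketch below shows one way to confirm that outside the cluster; it assumes ghcr.io's anonymous token endpoint (https://ghcr.io/token?scope=repository:<name>:pull) and the standard OCI distribution manifest route, neither of which appears in the log itself.

// registry_check.go - hedged sketch: confirm whether a tag exists on ghcr.io.
// Assumes ghcr.io issues anonymous pull tokens at /token and serves the
// standard OCI distribution manifest route; adjust if the registry differs.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func tagExists(repo, tag string) (bool, error) {
	// 1. Fetch an anonymous bearer token scoped to pull access (assumed endpoint).
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=%s",
		url.QueryEscape("repository:"+repo+":pull"))
	resp, err := http.Get(tokURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// 2. HEAD the manifest; 200 means the tag resolves, 404 matches the log above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, "check failed:", err)
		os.Exit(1)
	}
	fmt.Println("tag exists:", ok) // expected: false, mirroring the 404 entries
}

A 404 from the HEAD request reproduces the containerd message above; a 200 would mean the tag exists and the failure lies elsewhere (credentials, mirror configuration, or a typo in the pod spec).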
Jan 28 06:58:00.986030 containerd[1638]: time="2026-01-28T06:58:00.984785620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:58:01.304166 containerd[1638]: time="2026-01-28T06:58:01.304053640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:01.306106 containerd[1638]: time="2026-01-28T06:58:01.305788256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:58:01.306106 containerd[1638]: time="2026-01-28T06:58:01.305922568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:01.307522 kubelet[2936]: E0128 06:58:01.306578 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:58:01.307522 kubelet[2936]: E0128 06:58:01.306674 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:58:01.318576 kubelet[2936]: E0128 06:58:01.318391 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr5hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:01.320701 kubelet[2936]: E0128 06:58:01.320622 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:58:02.983990 containerd[1638]: time="2026-01-28T06:58:02.983870421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:58:03.312125 containerd[1638]: time="2026-01-28T06:58:03.312011265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:03.313493 containerd[1638]: time="2026-01-28T06:58:03.313437322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:58:03.313577 containerd[1638]: time="2026-01-28T06:58:03.313543470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:03.314339 kubelet[2936]: E0128 06:58:03.313879 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:58:03.314339 kubelet[2936]: E0128 06:58:03.313997 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:58:03.314339 kubelet[2936]: E0128 06:58:03.314231 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6hr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:03.316354 kubelet[2936]: E0128 06:58:03.316286 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:58:04.945237 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:58:04.945478 kernel: audit: type=1130 audit(1769583484.920:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@9-10.230.31.94:22-20.161.92.111:33770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:04.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.31.94:22-20.161.92.111:33770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:04.921557 systemd[1]: Started sshd@9-10.230.31.94:22-20.161.92.111:33770.service - OpenSSH per-connection server daemon (20.161.92.111:33770). Jan 28 06:58:05.468000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.470321 sshd[5215]: Accepted publickey for core from 20.161.92.111 port 33770 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:05.473155 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:05.474977 kernel: audit: type=1101 audit(1769583485.468:771): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.470000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.483049 kernel: audit: type=1103 audit(1769583485.470:772): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.489985 kernel: audit: type=1006 audit(1769583485.470:773): pid=5215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 06:58:05.470000 audit[5215]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5b740f70 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:05.495989 kernel: audit: type=1300 audit(1769583485.470:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5b740f70 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:05.470000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:05.503970 kernel: audit: type=1327 audit(1769583485.470:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:05.501818 systemd-logind[1613]: New session 13 of user core. Jan 28 06:58:05.510330 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 06:58:05.516000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.524032 kernel: audit: type=1105 audit(1769583485.516:774): pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.524000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.531556 kernel: audit: type=1103 audit(1769583485.524:775): pid=5219 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.907480 sshd[5219]: Connection closed by 20.161.92.111 port 33770 Jan 28 06:58:05.909008 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:05.912000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.924272 kernel: audit: type=1106 audit(1769583485.912:776): pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.930285 systemd[1]: sshd@9-10.230.31.94:22-20.161.92.111:33770.service: Deactivated successfully. Jan 28 06:58:05.912000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.935971 kernel: audit: type=1104 audit(1769583485.912:777): pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:05.936446 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 06:58:05.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.31.94:22-20.161.92.111:33770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:05.941221 systemd-logind[1613]: Session 13 logged out. Waiting for processes to exit. Jan 28 06:58:05.945315 systemd-logind[1613]: Removed session 13. 
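Note the cadence of the kubelet entries: each image gets one real pull attempt (the ErrImagePull entry with the full container spec dumped by the "Unhandled Error" logger), while the intervening "Error syncing pod, skipping ... ImagePullBackOff" lines only report the per-image backoff kubelet applies between retries. As a rough illustration of that spacing, the sketch below prints an exponential backoff schedule; the 10-second initial delay and 300-second cap are assumed defaults for kubelet's image pull backoff and are not recorded anywhere in this log.

// backoff_schedule.go - hedged sketch of an exponential image-pull backoff.
// Assumes a 10s initial delay doubling to a 300s cap (commonly cited kubelet
// defaults); the real values in effect on this node are not recorded here.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second
		maxDelay = 300 * time.Second
	)
	delay := initial
	elapsed := time.Duration(0)
	for i := 1; i <= 8; i++ {
		elapsed += delay
		fmt.Printf("retry %d after %-5v (total %v since first failure)\n",
			i, delay, elapsed)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

With those assumed values the interval reaches the 5-minute cap after roughly five failures; whether a given gap above is tens of seconds or several minutes depends on how long that particular image has already been failing.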
Jan 28 06:58:05.982275 containerd[1638]: time="2026-01-28T06:58:05.982195373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:58:06.296582 containerd[1638]: time="2026-01-28T06:58:06.296459339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:06.298001 containerd[1638]: time="2026-01-28T06:58:06.297850424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:58:06.298084 containerd[1638]: time="2026-01-28T06:58:06.298029499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:06.298466 kubelet[2936]: E0128 06:58:06.298374 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:58:06.299225 kubelet[2936]: E0128 06:58:06.298482 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:58:06.299225 kubelet[2936]: E0128 06:58:06.298743 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:06.302183 containerd[1638]: time="2026-01-28T06:58:06.302130361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:58:06.622471 containerd[1638]: time="2026-01-28T06:58:06.621768259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:06.624133 containerd[1638]: time="2026-01-28T06:58:06.624041732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:58:06.624219 containerd[1638]: time="2026-01-28T06:58:06.624167595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:06.624500 kubelet[2936]: E0128 06:58:06.624324 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:58:06.624500 kubelet[2936]: E0128 06:58:06.624393 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:58:06.624651 kubelet[2936]: E0128 06:58:06.624570 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:06.626486 kubelet[2936]: E0128 06:58:06.625746 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:58:06.981617 containerd[1638]: time="2026-01-28T06:58:06.980554617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:58:07.319572 containerd[1638]: time="2026-01-28T06:58:07.319282681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:07.320788 containerd[1638]: time="2026-01-28T06:58:07.320598381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:58:07.320788 containerd[1638]: time="2026-01-28T06:58:07.320735188Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:07.321389 kubelet[2936]: E0128 06:58:07.321266 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:07.321389 kubelet[2936]: E0128 06:58:07.321355 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:07.322759 kubelet[2936]: E0128 06:58:07.322474 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:07.322983 containerd[1638]: time="2026-01-28T06:58:07.322783745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:58:07.324004 kubelet[2936]: E0128 06:58:07.323923 2936 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:58:07.642524 containerd[1638]: time="2026-01-28T06:58:07.641286347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:07.643600 containerd[1638]: time="2026-01-28T06:58:07.643392466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:58:07.643600 containerd[1638]: time="2026-01-28T06:58:07.643530428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:07.644659 kubelet[2936]: E0128 06:58:07.644050 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:07.644659 kubelet[2936]: E0128 06:58:07.644146 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:07.644659 kubelet[2936]: E0128 06:58:07.644366 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:07.647055 kubelet[2936]: E0128 06:58:07.646997 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:58:09.979705 kubelet[2936]: E0128 06:58:09.979618 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:58:11.011478 systemd[1]: Started sshd@10-10.230.31.94:22-20.161.92.111:33786.service - OpenSSH per-connection server daemon (20.161.92.111:33786). Jan 28 06:58:11.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.31.94:22-20.161.92.111:33786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:11.017896 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:58:11.018017 kernel: audit: type=1130 audit(1769583491.011:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.31.94:22-20.161.92.111:33786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:11.535000 audit[5235]: USER_ACCT pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.550993 kernel: audit: type=1101 audit(1769583491.535:780): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.551512 sshd[5235]: Accepted publickey for core from 20.161.92.111 port 33786 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:11.558495 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:11.555000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.565998 kernel: audit: type=1103 audit(1769583491.555:781): pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.577592 kernel: audit: type=1006 audit(1769583491.555:782): pid=5235 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 06:58:11.577784 kernel: audit: type=1300 audit(1769583491.555:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6ccf2060 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:11.555000 audit[5235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6ccf2060 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:11.555000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:11.587381 kernel: audit: type=1327 audit(1769583491.555:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:11.586613 systemd-logind[1613]: New session 14 of user core. Jan 28 06:58:11.599447 systemd[1]: Started session-14.scope - Session 14 of User core. 
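The containerd records above ("fetch failed after status: 404 Not Found" from ghcr.io, then "failed to resolve image ... not found") and every kubelet ErrImagePull that follows share one root cause: the v3.30.4 tag does not resolve under ghcr.io/flatcar/calico/*. A quick way to confirm that outside of containerd is to ask the registry's OCI distribution API for the manifest directly. The sketch below is illustrative only; it assumes ghcr.io issues anonymous pull tokens from its /token endpoint for public repositories and that /v2/<name>/manifests/<tag> answers 404 for a missing tag, which is the same 404 containerd logged.

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether an image tag resolves on a registry.

Assumptions (not taken from this log): ghcr.io hands out anonymous pull
tokens at /token for public repositories, and the OCI distribution
endpoint /v2/<name>/manifests/<tag> returns 404 for a missing tag.
"""
import json
import sys
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"


def tag_exists(name: str, tag: str) -> bool:
    # 1. Anonymous bearer token scoped to pulling this repository.
    token_url = f"https://{REGISTRY}/token?scope=repository:{name}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # 2. Probe the manifest; a 404 here is what kubelet surfaced as
    #    "failed to resolve image ... not found".
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{name}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    # e.g. python3 check_tag.py flatcar/calico/apiserver:v3.30.4
    name, tag = sys.argv[1].rsplit(":", 1)
    print(f"{name}:{tag} ->", "found" if tag_exists(name, tag) else "not found (404)")
```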
Jan 28 06:58:11.606000 audit[5235]: USER_START pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.614984 kernel: audit: type=1105 audit(1769583491.606:783): pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.614000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.622043 kernel: audit: type=1103 audit(1769583491.614:784): pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.976748 sshd[5239]: Connection closed by 20.161.92.111 port 33786 Jan 28 06:58:11.976880 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:11.992997 kernel: audit: type=1106 audit(1769583491.980:785): pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.980000 audit[5235]: USER_END pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:11.980000 audit[5235]: CRED_DISP pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.005107 kernel: audit: type=1104 audit(1769583491.980:786): pid=5235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.000127 systemd[1]: sshd@10-10.230.31.94:22-20.161.92.111:33786.service: Deactivated successfully. Jan 28 06:58:12.004591 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 06:58:12.007086 systemd-logind[1613]: Session 14 logged out. Waiting for processes to exit. Jan 28 06:58:12.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.31.94:22-20.161.92.111:33786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:12.009001 systemd-logind[1613]: Removed session 14. 
Jan 28 06:58:12.080564 systemd[1]: Started sshd@11-10.230.31.94:22-20.161.92.111:33794.service - OpenSSH per-connection server daemon (20.161.92.111:33794). Jan 28 06:58:12.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.31.94:22-20.161.92.111:33794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:12.589000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.590987 sshd[5252]: Accepted publickey for core from 20.161.92.111 port 33794 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:12.591000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.591000 audit[5252]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbb73a280 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:12.591000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:12.593692 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:12.604172 systemd-logind[1613]: New session 15 of user core. Jan 28 06:58:12.613251 systemd[1]: Started session-15.scope - Session 15 of User core. 
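Each inbound connection in this log is accepted by a socket-activated per-connection unit whose instance name packs a connection counter plus the local and remote endpoints, e.g. sshd@11-10.230.31.94:22-20.161.92.111:33794.service. When grepping sessions out of a capture like this one, a throwaway parser for that naming is handy; the pattern below is inferred from the unit names seen here (IPv4 only), not a documented format guarantee.

```python
import re

# sshd@<counter>-<local addr>:<local port>-<remote addr>:<remote port>.service
# (pattern inferred from the unit names in this log; IPv4 only)
UNIT_RE = re.compile(
    r"sshd@(?P<counter>\d+)"
    r"-(?P<local_addr>[\d.]+):(?P<local_port>\d+)"
    r"-(?P<remote_addr>[\d.]+):(?P<remote_port>\d+)\.service"
)


def split_unit(unit: str) -> dict:
    m = UNIT_RE.fullmatch(unit)
    if m is None:
        raise ValueError(f"unexpected unit name: {unit}")
    return m.groupdict()


print(split_unit("sshd@11-10.230.31.94:22-20.161.92.111:33794.service"))
# {'counter': '11', 'local_addr': '10.230.31.94', 'local_port': '22',
#  'remote_addr': '20.161.92.111', 'remote_port': '33794'}
```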
Jan 28 06:58:12.619000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.623000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:12.982148 kubelet[2936]: E0128 06:58:12.981146 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:58:13.085416 sshd[5256]: Connection closed by 20.161.92.111 port 33794 Jan 28 06:58:13.086441 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:13.089000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:13.089000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:13.096430 systemd[1]: sshd@11-10.230.31.94:22-20.161.92.111:33794.service: Deactivated successfully. Jan 28 06:58:13.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.31.94:22-20.161.92.111:33794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:13.106430 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 06:58:13.111871 systemd-logind[1613]: Session 15 logged out. Waiting for processes to exit. Jan 28 06:58:13.115370 systemd-logind[1613]: Removed session 15. Jan 28 06:58:13.193591 systemd[1]: Started sshd@12-10.230.31.94:22-20.161.92.111:44278.service - OpenSSH per-connection server daemon (20.161.92.111:44278). Jan 28 06:58:13.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.31.94:22-20.161.92.111:44278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:13.716000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:13.717712 sshd[5266]: Accepted publickey for core from 20.161.92.111 port 44278 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:13.718000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:13.719000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea2ab2fe0 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:13.719000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:13.722507 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:13.738213 systemd-logind[1613]: New session 16 of user core. Jan 28 06:58:13.748424 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 06:58:13.757000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:13.764000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:14.131059 sshd[5270]: Connection closed by 20.161.92.111 port 44278 Jan 28 06:58:14.133266 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:14.139000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:14.140000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:14.145554 systemd[1]: sshd@12-10.230.31.94:22-20.161.92.111:44278.service: Deactivated successfully. Jan 28 06:58:14.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.31.94:22-20.161.92.111:44278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:14.154026 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 06:58:14.157304 systemd-logind[1613]: Session 16 logged out. Waiting for processes to exit. 
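The kubelet "Error syncing pod, skipping" records keep repeating for the same handful of calico pods (apiserver, whisker, goldmane and the other components), each naming the image that failed to resolve. To see at a glance which pods are stuck and on which images, the pod="..." field and the image references inside the escaped err="..." payload can be pulled out with a couple of regexes; this is only a reading aid for logs of this shape, not tooling that exists on the host.

```python
import re
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
# Image references inside the escaped err="..." payload, e.g.
#   \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"
IMAGE_RE = re.compile(r'ghcr\.io/[\w./-]+:[\w.-]+')


def stuck_pods(lines):
    """Map pod -> set of images that failed to pull, from 'Error syncing pod' lines."""
    result = defaultdict(set)
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if pod:
            result[pod.group("pod")].update(IMAGE_RE.findall(line))
    return dict(result)


sample = ('kubelet[2936]: E0128 ... "Error syncing pod, skipping" '
          'err="... image \\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\" ..." '
          'pod="calico-system/goldmane-666569f655-9zxml"')
print(stuck_pods([sample]))
# {'calico-system/goldmane-666569f655-9zxml': {'ghcr.io/flatcar/calico/goldmane:v3.30.4'}}
```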
Jan 28 06:58:14.159496 systemd-logind[1613]: Removed session 16. Jan 28 06:58:16.982693 kubelet[2936]: E0128 06:58:16.982358 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:58:17.982111 kubelet[2936]: E0128 06:58:17.982032 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:58:19.238045 systemd[1]: Started sshd@13-10.230.31.94:22-20.161.92.111:44286.service - OpenSSH per-connection server daemon (20.161.92.111:44286). Jan 28 06:58:19.243567 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 06:58:19.244297 kernel: audit: type=1130 audit(1769583499.237:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.31.94:22-20.161.92.111:44286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:19.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.31.94:22-20.161.92.111:44286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:19.815000 audit[5309]: USER_ACCT pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.821991 sshd[5309]: Accepted publickey for core from 20.161.92.111 port 44286 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:19.824466 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:19.825210 kernel: audit: type=1101 audit(1769583499.815:807): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.820000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.837092 kernel: audit: type=1103 audit(1769583499.820:808): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.844277 systemd-logind[1613]: New session 17 of user core. Jan 28 06:58:19.849980 kernel: audit: type=1006 audit(1769583499.820:809): pid=5309 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 06:58:19.820000 audit[5309]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfce41520 a2=3 a3=0 items=0 ppid=1 pid=5309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:19.857973 kernel: audit: type=1300 audit(1769583499.820:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfce41520 a2=3 a3=0 items=0 ppid=1 pid=5309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:19.858590 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 06:58:19.820000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:19.866971 kernel: audit: type=1327 audit(1769583499.820:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:19.866000 audit[5309]: USER_START pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.874065 kernel: audit: type=1105 audit(1769583499.866:810): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.874000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:19.882030 kernel: audit: type=1103 audit(1769583499.874:811): pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:20.307787 sshd[5313]: Connection closed by 20.161.92.111 port 44286 Jan 28 06:58:20.308855 sshd-session[5309]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:20.312000 audit[5309]: USER_END pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:20.323428 kernel: audit: type=1106 audit(1769583500.312:812): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:20.324062 systemd[1]: sshd@13-10.230.31.94:22-20.161.92.111:44286.service: Deactivated successfully. Jan 28 06:58:20.329010 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 06:58:20.315000 audit[5309]: CRED_DISP pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:20.336981 kernel: audit: type=1104 audit(1769583500.315:813): pid=5309 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:20.338106 systemd-logind[1613]: Session 17 logged out. Waiting for processes to exit. 
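Each SSH login above leaves the same audit trail: USER_ACCT and CRED_ACQ when the key is accepted, USER_START when the session scope starts, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection unit when it closes. The records are flat key=value text with a single-quoted msg='...' payload, so a short parser is enough to tabulate them. The sketch below only handles the syslog-side audit[pid]: form seen in these lines, not every auditd record type.

```python
import re

# Matches records of the form:
#   audit[5309]: USER_START pid=5309 uid=0 auid=500 ses=17 ... msg='op=PAM:session_open ... res=success'
HEADER_RE = re.compile(r"audit\[\d+\]: (?P<type>[A-Z_]+) (?P<fields>.*)")
KV_RE = re.compile(r"(\w+)=('[^']*'|\S+)")


def parse_audit(line: str) -> dict:
    m = HEADER_RE.search(line)
    if m is None:
        return {}
    record = {"type": m.group("type")}
    for key, value in KV_RE.findall(m.group("fields")):
        record[key] = value.strip("'")
    return record


sample = ("audit[5309]: USER_END pid=5309 uid=0 auid=500 ses=17 "
          "msg='op=PAM:session_close acct=\"core\" terminal=ssh res=success'")
rec = parse_audit(sample)
print(rec["type"], rec["ses"], rec["msg"].split()[0])   # USER_END 17 op=PAM:session_close
```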
Jan 28 06:58:20.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.31.94:22-20.161.92.111:44286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:20.342112 systemd-logind[1613]: Removed session 17. Jan 28 06:58:20.984264 kubelet[2936]: E0128 06:58:20.980262 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:58:20.984264 kubelet[2936]: E0128 06:58:20.982310 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:58:20.984264 kubelet[2936]: E0128 06:58:20.982543 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:58:25.413155 systemd[1]: Started sshd@14-10.230.31.94:22-20.161.92.111:35868.service - OpenSSH per-connection server daemon (20.161.92.111:35868). Jan 28 06:58:25.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.31.94:22-20.161.92.111:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:25.416473 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:58:25.416583 kernel: audit: type=1130 audit(1769583505.412:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.31.94:22-20.161.92.111:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:25.986000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:25.990315 sshd[5333]: Accepted publickey for core from 20.161.92.111 port 35868 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:25.994089 kernel: audit: type=1101 audit(1769583505.986:816): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:25.994372 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:25.988000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.002311 kernel: audit: type=1103 audit(1769583505.988:817): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.012473 kernel: audit: type=1006 audit(1769583505.988:818): pid=5333 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 06:58:26.013103 kernel: audit: type=1300 audit(1769583505.988:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca73cb840 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:25.988000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca73cb840 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:25.988000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:26.017417 kernel: audit: type=1327 audit(1769583505.988:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:26.022396 systemd-logind[1613]: New session 18 of user core. Jan 28 06:58:26.029661 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 06:58:26.037000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.046999 kernel: audit: type=1105 audit(1769583506.037:819): pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.047000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.053995 kernel: audit: type=1103 audit(1769583506.047:820): pid=5337 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.439713 sshd[5337]: Connection closed by 20.161.92.111 port 35868 Jan 28 06:58:26.440662 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:26.446000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.456052 kernel: audit: type=1106 audit(1769583506.446:821): pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.457921 systemd[1]: sshd@14-10.230.31.94:22-20.161.92.111:35868.service: Deactivated successfully. Jan 28 06:58:26.467515 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 06:58:26.476278 kernel: audit: type=1104 audit(1769583506.449:822): pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.449000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:26.470151 systemd-logind[1613]: Session 18 logged out. Waiting for processes to exit. Jan 28 06:58:26.481394 systemd-logind[1613]: Removed session 18. Jan 28 06:58:26.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.31.94:22-20.161.92.111:35868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:26.979408 kubelet[2936]: E0128 06:58:26.978906 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:58:29.979218 kubelet[2936]: E0128 06:58:29.979145 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:58:31.566977 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:58:31.567898 kernel: audit: type=1130 audit(1769583511.542:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.31.94:22-20.161.92.111:35870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:31.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.31.94:22-20.161.92.111:35870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:31.543402 systemd[1]: Started sshd@15-10.230.31.94:22-20.161.92.111:35870.service - OpenSSH per-connection server daemon (20.161.92.111:35870). 
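The same "Back-off pulling image" lines keep returning every few seconds for the same pods because kubelet retries failed pulls on an exponentially growing delay with an upper cap rather than hammering the registry on every sync. Purely as an illustration of that shape (the 10 s initial delay, factor 2 and 300 s cap below are assumed values for the sketch, not numbers read from this log):

```python
from dataclasses import dataclass


@dataclass
class Backoff:
    """Toy model of an image-pull backoff: the delay doubles per failure up to a cap.

    The 10 s initial delay, factor 2 and 300 s cap are illustrative assumptions,
    not values taken from this log.
    """
    initial: float = 10.0
    factor: float = 2.0
    cap: float = 300.0
    failures: int = 0

    def next_delay(self) -> float:
        delay = min(self.initial * self.factor ** self.failures, self.cap)
        self.failures += 1
        return delay


if __name__ == "__main__":
    b = Backoff()
    print([b.next_delay() for _ in range(7)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```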
Jan 28 06:58:31.979685 kubelet[2936]: E0128 06:58:31.979439 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:58:32.098000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.109938 sshd[5349]: Accepted publickey for core from 20.161.92.111 port 35870 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:32.112142 kernel: audit: type=1101 audit(1769583512.098:825): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.110000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.113835 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:32.119008 kernel: audit: type=1103 audit(1769583512.110:826): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.124659 kernel: audit: type=1006 audit(1769583512.111:827): pid=5349 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 06:58:32.111000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8d37490 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:32.133566 kernel: audit: type=1300 audit(1769583512.111:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8d37490 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:32.134289 kernel: audit: type=1327 audit(1769583512.111:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:32.111000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:32.142313 systemd-logind[1613]: New session 19 of user core. Jan 28 06:58:32.148416 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 06:58:32.156000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.167297 kernel: audit: type=1105 audit(1769583512.156:828): pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.167436 kernel: audit: type=1103 audit(1769583512.163:829): pid=5353 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.163000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.532619 sshd[5353]: Connection closed by 20.161.92.111 port 35870 Jan 28 06:58:32.533120 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:32.537000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.550121 kernel: audit: type=1106 audit(1769583512.537:830): pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.543000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.555127 systemd[1]: sshd@15-10.230.31.94:22-20.161.92.111:35870.service: Deactivated successfully. Jan 28 06:58:32.556639 kernel: audit: type=1104 audit(1769583512.543:831): pid=5349 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:32.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.31.94:22-20.161.92.111:35870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:32.562668 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 06:58:32.566326 systemd-logind[1613]: Session 19 logged out. Waiting for processes to exit. Jan 28 06:58:32.572238 systemd-logind[1613]: Removed session 19. Jan 28 06:58:32.984349 kubelet[2936]: E0128 06:58:32.984046 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:58:33.977974 kubelet[2936]: E0128 06:58:33.977802 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:58:35.980896 kubelet[2936]: E0128 06:58:35.980814 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:58:37.638743 systemd[1]: Started sshd@16-10.230.31.94:22-20.161.92.111:56050.service - OpenSSH per-connection server daemon (20.161.92.111:56050). Jan 28 06:58:37.651087 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:58:37.651249 kernel: audit: type=1130 audit(1769583517.637:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.31.94:22-20.161.92.111:56050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:37.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.31.94:22-20.161.92.111:56050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:38.169000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.181853 kernel: audit: type=1101 audit(1769583518.169:834): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.181956 sshd[5372]: Accepted publickey for core from 20.161.92.111 port 56050 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:38.182000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.184728 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:38.188981 kernel: audit: type=1103 audit(1769583518.182:835): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.196977 kernel: audit: type=1006 audit(1769583518.182:836): pid=5372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 06:58:38.200697 systemd-logind[1613]: New session 20 of user core. Jan 28 06:58:38.182000 audit[5372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa84ecd90 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:38.214075 kernel: audit: type=1300 audit(1769583518.182:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa84ecd90 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:38.214229 kernel: audit: type=1327 audit(1769583518.182:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:38.182000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:38.218548 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 28 06:58:38.227000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.237970 kernel: audit: type=1105 audit(1769583518.227:837): pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.234000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.244998 kernel: audit: type=1103 audit(1769583518.234:838): pid=5376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.638032 sshd[5376]: Connection closed by 20.161.92.111 port 56050 Jan 28 06:58:38.640709 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:38.649000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.658580 systemd[1]: sshd@16-10.230.31.94:22-20.161.92.111:56050.service: Deactivated successfully. Jan 28 06:58:38.662103 kernel: audit: type=1106 audit(1769583518.649:839): pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.662206 kernel: audit: type=1104 audit(1769583518.649:840): pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.649000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:38.661512 systemd-logind[1613]: Session 20 logged out. Waiting for processes to exit. Jan 28 06:58:38.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.31.94:22-20.161.92.111:56050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:38.666928 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 06:58:38.675584 systemd-logind[1613]: Removed session 20. 
Jan 28 06:58:38.740671 systemd[1]: Started sshd@17-10.230.31.94:22-20.161.92.111:56054.service - OpenSSH per-connection server daemon (20.161.92.111:56054). Jan 28 06:58:38.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.31.94:22-20.161.92.111:56054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:39.905000 audit[5388]: USER_ACCT pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:39.908380 sshd[5388]: Accepted publickey for core from 20.161.92.111 port 56054 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:39.908000 audit[5388]: CRED_ACQ pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:39.909000 audit[5388]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0c0871d0 a2=3 a3=0 items=0 ppid=1 pid=5388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:39.909000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:39.912358 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:39.926410 systemd-logind[1613]: New session 21 of user core. Jan 28 06:58:39.932571 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 06:58:39.940000 audit[5388]: USER_START pid=5388 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:39.945000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:40.668121 sshd[5392]: Connection closed by 20.161.92.111 port 56054 Jan 28 06:58:40.670614 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:40.677000 audit[5388]: USER_END pid=5388 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:40.678000 audit[5388]: CRED_DISP pid=5388 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:40.685408 systemd-logind[1613]: Session 21 logged out. Waiting for processes to exit. 
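The audit PROCTITLE values appear as hex because the raw command line contains characters (spaces, and NUL bytes separating argv entries) that the audit subsystem will not print literally. Decoding is one bytes.fromhex away; the value logged for these sessions, 737368642D...5D, comes out as "sshd-session: core [priv]". A tiny helper for reading records like the ones above:

```python
def decode_proctitle(hex_blob: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded bytes, argv entries split on NUL."""
    raw = bytes.fromhex(hex_blob)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]


print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# ['sshd-session: core [priv]']
```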
Jan 28 06:58:40.687345 systemd[1]: sshd@17-10.230.31.94:22-20.161.92.111:56054.service: Deactivated successfully. Jan 28 06:58:40.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.31.94:22-20.161.92.111:56054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:40.692301 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 06:58:40.696728 systemd-logind[1613]: Removed session 21. Jan 28 06:58:40.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.31.94:22-20.161.92.111:56062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:40.774198 systemd[1]: Started sshd@18-10.230.31.94:22-20.161.92.111:56062.service - OpenSSH per-connection server daemon (20.161.92.111:56062). Jan 28 06:58:40.981753 kubelet[2936]: E0128 06:58:40.981406 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:58:40.984474 kubelet[2936]: E0128 06:58:40.982374 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:58:41.361000 audit[5402]: USER_ACCT pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:41.363769 sshd[5402]: Accepted publickey for core from 20.161.92.111 port 56062 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:41.365000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:41.365000 audit[5402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefcf17860 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:41.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:41.368679 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:41.383750 systemd-logind[1613]: New session 22 of user core. 
Jan 28 06:58:41.390668 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 06:58:41.395000 audit[5402]: USER_START pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:41.399000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:42.748673 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 06:58:42.754769 kernel: audit: type=1325 audit(1769583522.736:857): table=filter:146 family=2 entries=26 op=nft_register_rule pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.736000 audit[5416]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.736000 audit[5416]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc86d7c8e0 a2=0 a3=7ffc86d7c8cc items=0 ppid=3048 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.763009 kernel: audit: type=1300 audit(1769583522.736:857): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc86d7c8e0 a2=0 a3=7ffc86d7c8cc items=0 ppid=3048 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:42.768986 kernel: audit: type=1327 audit(1769583522.736:857): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:42.777000 audit[5416]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.781970 kernel: audit: type=1325 audit(1769583522.777:858): table=nat:147 family=2 entries=20 op=nft_register_rule pid=5416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.777000 audit[5416]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc86d7c8e0 a2=0 a3=0 items=0 ppid=3048 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.787990 kernel: audit: type=1300 audit(1769583522.777:858): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc86d7c8e0 a2=0 a3=0 items=0 ppid=3048 pid=5416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 
06:58:42.792993 sshd[5406]: Connection closed by 20.161.92.111 port 56062 Jan 28 06:58:42.794865 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:42.795296 kernel: audit: type=1327 audit(1769583522.777:858): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:42.797000 audit[5402]: USER_END pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:42.805981 kernel: audit: type=1106 audit(1769583522.797:859): pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:42.800000 audit[5402]: CRED_DISP pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:42.807338 systemd-logind[1613]: Session 22 logged out. Waiting for processes to exit. Jan 28 06:58:42.808738 systemd[1]: sshd@18-10.230.31.94:22-20.161.92.111:56062.service: Deactivated successfully. Jan 28 06:58:42.812210 kernel: audit: type=1104 audit(1769583522.800:860): pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:42.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.31.94:22-20.161.92.111:56062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:42.817201 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 06:58:42.819977 kernel: audit: type=1131 audit(1769583522.808:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.31.94:22-20.161.92.111:56062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:42.821074 systemd-logind[1613]: Removed session 22. 
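Each audit record carries its own timestamp in the form audit(<unix-seconds>.<milliseconds>:<serial>); the kernel lines above show, for example, audit(1769583522.777:858). Converting that epoch value to UTC reproduces the journal's own Jan 28 06:58:42.777 prefix for the record, as this small Python sketch shows:

from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> datetime:
    """Convert an 'audit(<epoch>.<ms>:<serial>)' stamp to a UTC datetime."""
    inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
    epoch, _serial = inner.split(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

print(audit_stamp_to_utc("audit(1769583522.777:858)"))
# 2026-01-28 06:58:42.777000+00:00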
Jan 28 06:58:42.844000 audit[5421]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.849041 kernel: audit: type=1325 audit(1769583522.844:862): table=filter:148 family=2 entries=38 op=nft_register_rule pid=5421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.844000 audit[5421]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffb6455350 a2=0 a3=7fffb645533c items=0 ppid=3048 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:42.852000 audit[5421]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:42.852000 audit[5421]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffb6455350 a2=0 a3=0 items=0 ppid=3048 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:42.852000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:42.895514 systemd[1]: Started sshd@19-10.230.31.94:22-20.161.92.111:42664.service - OpenSSH per-connection server daemon (20.161.92.111:42664). Jan 28 06:58:42.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.31.94:22-20.161.92.111:42664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:43.437000 audit[5423]: USER_ACCT pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:43.440624 sshd[5423]: Accepted publickey for core from 20.161.92.111 port 42664 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:43.440000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:43.441000 audit[5423]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd23f9b10 a2=3 a3=0 items=0 ppid=1 pid=5423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:43.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:43.443305 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:43.454336 systemd-logind[1613]: New session 23 of user core. Jan 28 06:58:43.463337 systemd[1]: Started session-23.scope - Session 23 of User core. 
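The proctitle= values in the audit records are the audited process's argv, hex-encoded with NUL bytes separating arguments. Decoding the value logged for the NETFILTER_CFG events above recovers the iptables-restore command line, and the sshd value decodes the same way; a short Python sketch:

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded, NUL-separated argv."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# iptables-restore -w 5 -W 100000 --noflush --counters
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# sshd-session: core [priv]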
Jan 28 06:58:43.472000 audit[5423]: USER_START pid=5423 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:43.476000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.214667 sshd[5427]: Connection closed by 20.161.92.111 port 42664 Jan 28 06:58:44.216360 sshd-session[5423]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:44.221000 audit[5423]: USER_END pid=5423 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.221000 audit[5423]: CRED_DISP pid=5423 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.227659 systemd-logind[1613]: Session 23 logged out. Waiting for processes to exit. Jan 28 06:58:44.229707 systemd[1]: sshd@19-10.230.31.94:22-20.161.92.111:42664.service: Deactivated successfully. Jan 28 06:58:44.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.31.94:22-20.161.92.111:42664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:44.233925 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 06:58:44.238078 systemd-logind[1613]: Removed session 23. Jan 28 06:58:44.318865 systemd[1]: Started sshd@20-10.230.31.94:22-20.161.92.111:42666.service - OpenSSH per-connection server daemon (20.161.92.111:42666). Jan 28 06:58:44.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.31.94:22-20.161.92.111:42666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:44.872000 audit[5437]: USER_ACCT pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.874086 sshd[5437]: Accepted publickey for core from 20.161.92.111 port 42666 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:44.874000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.875000 audit[5437]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7b29e970 a2=3 a3=0 items=0 ppid=1 pid=5437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:44.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:44.877439 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:44.887647 systemd-logind[1613]: New session 24 of user core. Jan 28 06:58:44.898271 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 06:58:44.904000 audit[5437]: USER_START pid=5437 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.909000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:44.983440 kubelet[2936]: E0128 06:58:44.982891 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:58:45.314462 sshd[5469]: Connection closed by 20.161.92.111 port 42666 Jan 28 06:58:45.315784 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:45.318000 audit[5437]: USER_END pid=5437 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:45.319000 audit[5437]: CRED_DISP pid=5437 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:45.324614 systemd-logind[1613]: Session 24 logged out. Waiting for processes to exit. Jan 28 06:58:45.325280 systemd[1]: sshd@20-10.230.31.94:22-20.161.92.111:42666.service: Deactivated successfully. Jan 28 06:58:45.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.31.94:22-20.161.92.111:42666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:45.330393 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 06:58:45.334321 systemd-logind[1613]: Removed session 24. Jan 28 06:58:46.979961 kubelet[2936]: E0128 06:58:46.979844 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:58:47.013441 containerd[1638]: time="2026-01-28T06:58:46.982399164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:58:47.325473 containerd[1638]: time="2026-01-28T06:58:47.325383462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:47.327543 containerd[1638]: time="2026-01-28T06:58:47.327485458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:58:47.327688 containerd[1638]: time="2026-01-28T06:58:47.327643727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:47.328120 kubelet[2936]: E0128 06:58:47.328039 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:58:47.328200 kubelet[2936]: E0128 06:58:47.328154 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:58:47.328513 kubelet[2936]: E0128 06:58:47.328443 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:577d02453e684fc79960ed8b50e8d722,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:47.332965 containerd[1638]: time="2026-01-28T06:58:47.332599025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:58:47.647655 containerd[1638]: time="2026-01-28T06:58:47.647460694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:47.658029 containerd[1638]: time="2026-01-28T06:58:47.657974704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:47.658133 containerd[1638]: time="2026-01-28T06:58:47.658033405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:58:47.659277 kubelet[2936]: E0128 06:58:47.658493 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:58:47.659277 kubelet[2936]: E0128 06:58:47.658580 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:58:47.659277 kubelet[2936]: E0128 06:58:47.658812 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5457cf895c-z6ltp_calico-system(7420162d-3e1e-4922-8cd8-19db25c1125f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:47.660102 kubelet[2936]: E0128 06:58:47.660049 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:58:49.978999 containerd[1638]: time="2026-01-28T06:58:49.978899928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:58:50.381095 containerd[1638]: time="2026-01-28T06:58:50.381004457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:50.383466 containerd[1638]: time="2026-01-28T06:58:50.383419031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
28 06:58:50.383608 containerd[1638]: time="2026-01-28T06:58:50.383572991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:50.383991 kubelet[2936]: E0128 06:58:50.383904 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:50.384566 kubelet[2936]: E0128 06:58:50.384020 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:50.384566 kubelet[2936]: E0128 06:58:50.384318 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-bbmbt_calico-apiserver(1023d10f-af49-4cbc-b6ed-d31a2d3bba42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:50.385600 kubelet[2936]: E0128 06:58:50.385541 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" Jan 28 06:58:50.431507 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 06:58:50.431685 kernel: audit: type=1130 audit(1769583530.421:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.31.94:22-20.161.92.111:42682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:50.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.31.94:22-20.161.92.111:42682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:50.422366 systemd[1]: Started sshd@21-10.230.31.94:22-20.161.92.111:42682.service - OpenSSH per-connection server daemon (20.161.92.111:42682). Jan 28 06:58:50.948522 sshd[5502]: Accepted publickey for core from 20.161.92.111 port 42682 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:50.961193 kernel: audit: type=1101 audit(1769583530.947:883): pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:50.947000 audit[5502]: USER_ACCT pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:50.967190 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:50.977110 kernel: audit: type=1103 audit(1769583530.964:884): pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:50.964000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:50.984986 kernel: audit: type=1006 audit(1769583530.964:885): pid=5502 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 28 06:58:50.964000 audit[5502]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccb2d53c0 a2=3 a3=0 items=0 ppid=1 pid=5502 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:50.995026 kernel: audit: type=1300 audit(1769583530.964:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccb2d53c0 a2=3 a3=0 items=0 ppid=1 pid=5502 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:58:50.964000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:50.999063 kernel: audit: type=1327 audit(1769583530.964:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:50.999391 systemd-logind[1613]: New session 25 of user core. Jan 28 06:58:51.008331 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 06:58:51.024805 kernel: audit: type=1105 audit(1769583531.016:886): pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.016000 audit[5502]: USER_START pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.027000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.034981 kernel: audit: type=1103 audit(1769583531.027:887): pid=5506 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.400642 sshd[5506]: Connection closed by 20.161.92.111 port 42682 Jan 28 06:58:51.401255 sshd-session[5502]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:51.419388 kernel: audit: type=1106 audit(1769583531.404:888): pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.404000 audit[5502]: USER_END pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.419000 audit[5502]: CRED_DISP pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.432906 systemd[1]: sshd@21-10.230.31.94:22-20.161.92.111:42682.service: Deactivated successfully. 
Jan 28 06:58:51.435502 kernel: audit: type=1104 audit(1769583531.419:889): pid=5502 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:51.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.31.94:22-20.161.92.111:42682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:51.441028 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 06:58:51.443856 systemd-logind[1613]: Session 25 logged out. Waiting for processes to exit. Jan 28 06:58:51.447778 systemd-logind[1613]: Removed session 25. Jan 28 06:58:51.977000 audit[5518]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:51.977000 audit[5518]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9efed190 a2=0 a3=7ffd9efed17c items=0 ppid=3048 pid=5518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:51.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:51.982138 containerd[1638]: time="2026-01-28T06:58:51.982038797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:58:51.990000 audit[5518]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:58:51.990000 audit[5518]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9efed190 a2=0 a3=7ffd9efed17c items=0 ppid=3048 pid=5518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:51.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:58:52.310673 containerd[1638]: time="2026-01-28T06:58:52.310593456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:52.318744 containerd[1638]: time="2026-01-28T06:58:52.318492259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:58:52.318744 containerd[1638]: time="2026-01-28T06:58:52.318684115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:52.319307 kubelet[2936]: E0128 06:58:52.319213 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:58:52.319896 kubelet[2936]: E0128 06:58:52.319334 2936 kuberuntime_image.go:42] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:58:52.319896 kubelet[2936]: E0128 06:58:52.319622 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6hr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74fbf496f-6pckk_calico-system(c1237d78-1650-42b9-ac4f-842b943ada74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:52.321660 kubelet[2936]: E0128 06:58:52.320751 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74" Jan 28 06:58:52.982975 containerd[1638]: time="2026-01-28T06:58:52.982839273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:58:53.297204 containerd[1638]: time="2026-01-28T06:58:53.297105117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:53.298565 containerd[1638]: time="2026-01-28T06:58:53.298406207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:58:53.298565 containerd[1638]: time="2026-01-28T06:58:53.298518686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:53.299238 kubelet[2936]: E0128 06:58:53.299155 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:58:53.299462 kubelet[2936]: E0128 06:58:53.299430 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:58:53.300318 kubelet[2936]: E0128 06:58:53.299894 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr5hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9zxml_calico-system(b3b161cf-39ef-4b57-bdfb-9046b0dd729b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:53.301812 kubelet[2936]: E0128 06:58:53.301585 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9zxml" podUID="b3b161cf-39ef-4b57-bdfb-9046b0dd729b" Jan 28 06:58:56.506078 systemd[1]: Started sshd@22-10.230.31.94:22-20.161.92.111:46674.service - OpenSSH per-connection server daemon (20.161.92.111:46674). Jan 28 06:58:56.531083 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 06:58:56.531201 kernel: audit: type=1130 audit(1769583536.505:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.31.94:22-20.161.92.111:46674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:56.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.31.94:22-20.161.92.111:46674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:58:57.048000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.060593 kernel: audit: type=1101 audit(1769583537.048:894): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.064639 sshd[5523]: Accepted publickey for core from 20.161.92.111 port 46674 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:58:57.069000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.071911 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:58:57.077153 kernel: audit: type=1103 audit(1769583537.069:895): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.085236 kernel: audit: type=1006 audit(1769583537.069:896): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 06:58:57.069000 audit[5523]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81cc7ce0 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:57.095184 kernel: audit: type=1300 audit(1769583537.069:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81cc7ce0 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:58:57.069000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:57.101071 kernel: audit: type=1327 audit(1769583537.069:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:58:57.101227 systemd-logind[1613]: New session 26 of user core. Jan 28 06:58:57.106292 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 28 06:58:57.113000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.123047 kernel: audit: type=1105 audit(1769583537.113:897): pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.122000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.129974 kernel: audit: type=1103 audit(1769583537.122:898): pid=5527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.516700 sshd[5527]: Connection closed by 20.161.92.111 port 46674 Jan 28 06:58:57.516536 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Jan 28 06:58:57.522000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.542485 kernel: audit: type=1106 audit(1769583537.522:899): pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.536000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.557516 kernel: audit: type=1104 audit(1769583537.536:900): pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:58:57.560918 systemd[1]: sshd@22-10.230.31.94:22-20.161.92.111:46674.service: Deactivated successfully. Jan 28 06:58:57.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.31.94:22-20.161.92.111:46674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:58:57.567202 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 06:58:57.571231 systemd-logind[1613]: Session 26 logged out. Waiting for processes to exit. Jan 28 06:58:57.574276 systemd-logind[1613]: Removed session 26. 
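The same ErrImagePull failure recurs above and below for several Calico images (whisker, whisker-backend, apiserver, kube-controllers, goldmane, csi, node-driver-registrar), always ending in containerd's "failed to resolve image: <ref>: not found". A small, purely illustrative Python sketch that tallies the affected references from journal text such as this section:

import re
from collections import Counter

NOT_FOUND = re.compile(r"failed to resolve image: (\S+?): not found")

def failed_image_refs(journal_text: str) -> Counter:
    """Count each unresolvable image reference mentioned in the journal text."""
    return Counter(NOT_FOUND.findall(journal_text))

sample = ('level=error msg="PullImage \\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\" failed" '
          'error="... failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"')
print(failed_image_refs(sample))
# Counter({'ghcr.io/flatcar/calico/goldmane:v3.30.4': 1})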
Jan 28 06:58:58.984560 containerd[1638]: time="2026-01-28T06:58:58.984263040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:58:59.314982 containerd[1638]: time="2026-01-28T06:58:59.314667834Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:59.316173 containerd[1638]: time="2026-01-28T06:58:59.316112862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:58:59.316503 containerd[1638]: time="2026-01-28T06:58:59.316169159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:59.317044 kubelet[2936]: E0128 06:58:59.316894 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:58:59.318400 kubelet[2936]: E0128 06:58:59.317346 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:58:59.318400 kubelet[2936]: E0128 06:58:59.317798 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:59.320126 containerd[1638]: time="2026-01-28T06:58:59.320079079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:58:59.623982 containerd[1638]: time="2026-01-28T06:58:59.622676796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:59.624986 containerd[1638]: time="2026-01-28T06:58:59.624806650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:58:59.624986 containerd[1638]: time="2026-01-28T06:58:59.624924041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:59.625929 kubelet[2936]: E0128 06:58:59.625239 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:59.625929 kubelet[2936]: E0128 06:58:59.625351 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:58:59.625929 kubelet[2936]: E0128 06:58:59.625681 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf65c897b-gv7nr_calico-apiserver(11f0b3ec-48a9-43d4-ba78-4be405b03a1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:59.627633 kubelet[2936]: E0128 06:58:59.627242 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-gv7nr" podUID="11f0b3ec-48a9-43d4-ba78-4be405b03a1e" Jan 28 06:58:59.628023 containerd[1638]: time="2026-01-28T06:58:59.627363673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:58:59.937400 containerd[1638]: time="2026-01-28T06:58:59.936583099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:58:59.938584 containerd[1638]: time="2026-01-28T06:58:59.938541094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:58:59.938769 containerd[1638]: time="2026-01-28T06:58:59.938668612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:58:59.939548 kubelet[2936]: E0128 06:58:59.939459 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:58:59.939628 kubelet[2936]: E0128 06:58:59.939559 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:58:59.940412 kubelet[2936]: E0128 06:58:59.940201 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hjw4_calico-system(feae4718-ebbe-416f-b2aa-04c3e4a5379c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:58:59.941741 kubelet[2936]: E0128 06:58:59.941679 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hjw4" podUID="feae4718-ebbe-416f-b2aa-04c3e4a5379c" Jan 28 06:59:00.989007 kubelet[2936]: E0128 06:59:00.988093 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf65c897b-bbmbt" podUID="1023d10f-af49-4cbc-b6ed-d31a2d3bba42" 
Jan 28 06:59:02.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.31.94:22-20.161.92.111:55782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:59:02.627990 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:59:02.628150 kernel: audit: type=1130 audit(1769583542.622:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.31.94:22-20.161.92.111:55782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:59:02.622401 systemd[1]: Started sshd@23-10.230.31.94:22-20.161.92.111:55782.service - OpenSSH per-connection server daemon (20.161.92.111:55782). Jan 28 06:59:02.983495 kubelet[2936]: E0128 06:59:02.983308 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5457cf895c-z6ltp" podUID="7420162d-3e1e-4922-8cd8-19db25c1125f" Jan 28 06:59:03.154000 audit[5538]: USER_ACCT pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.159243 sshd[5538]: Accepted publickey for core from 20.161.92.111 port 55782 ssh2: RSA SHA256:+vz+dXDaO4YrQQxNZdrbq5o/8O/v35dWfxEMt+yl0rQ Jan 28 06:59:03.162995 kernel: audit: type=1101 audit(1769583543.154:903): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.163294 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:59:03.157000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.171072 kernel: audit: type=1103 audit(1769583543.157:904): pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.157000 audit[5538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff74f2dd30 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:59:03.176767 kernel: audit: type=1006 audit(1769583543.157:905): pid=5538 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 28 06:59:03.176856 kernel: audit: type=1300 audit(1769583543.157:905): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff74f2dd30 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:59:03.157000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:59:03.185209 kernel: audit: type=1327 audit(1769583543.157:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:59:03.182060 systemd-logind[1613]: New session 27 of user core. Jan 28 06:59:03.190434 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 06:59:03.197000 audit[5538]: USER_START pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.215990 kernel: audit: type=1105 audit(1769583543.197:906): pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.216227 kernel: audit: type=1103 audit(1769583543.203:907): pid=5542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.203000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.572780 sshd[5542]: Connection closed by 20.161.92.111 port 55782 Jan 28 06:59:03.575213 sshd-session[5538]: pam_unix(sshd:session): session closed for user core Jan 28 06:59:03.577000 audit[5538]: USER_END pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.585327 kernel: audit: type=1106 audit(1769583543.577:908): pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.588505 systemd[1]: sshd@23-10.230.31.94:22-20.161.92.111:55782.service: Deactivated successfully. Jan 28 06:59:03.588783 systemd-logind[1613]: Session 27 logged out. 
Waiting for processes to exit. Jan 28 06:59:03.577000 audit[5538]: CRED_DISP pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.594448 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 06:59:03.596261 kernel: audit: type=1104 audit(1769583543.577:909): pid=5538 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 06:59:03.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.31.94:22-20.161.92.111:55782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:59:03.635318 systemd-logind[1613]: Removed session 27. Jan 28 06:59:06.982701 kubelet[2936]: E0128 06:59:06.982623 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74fbf496f-6pckk" podUID="c1237d78-1650-42b9-ac4f-842b943ada74"